00:00:00.001 Started by upstream project "autotest-nightly" build number 4307 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3670 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.019 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.021 The recommended git tool is: git 00:00:00.022 using credential 00000000-0000-0000-0000-000000000002 00:00:00.024 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.041 Fetching changes from the remote Git repository 00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.062 Using shallow fetch with depth 1 00:00:00.062 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.062 > git --version # timeout=10 00:00:00.086 > git --version # 'git version 2.39.2' 00:00:00.086 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.113 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.113 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.691 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.702 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.711 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.711 > git config core.sparsecheckout # timeout=10 00:00:02.720 > git read-tree -mu HEAD # timeout=10 00:00:02.733 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.752 Commit message: 
"jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.752 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.930 [Pipeline] Start of Pipeline 00:00:02.949 [Pipeline] library 00:00:02.951 Loading library shm_lib@master 00:00:02.951 Library shm_lib@master is cached. Copying from home. 00:00:02.978 [Pipeline] node 00:00:02.992 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.994 [Pipeline] { 00:00:03.004 [Pipeline] catchError 00:00:03.006 [Pipeline] { 00:00:03.019 [Pipeline] wrap 00:00:03.031 [Pipeline] { 00:00:03.040 [Pipeline] stage 00:00:03.042 [Pipeline] { (Prologue) 00:00:03.061 [Pipeline] echo 00:00:03.062 Node: VM-host-WFP7 00:00:03.069 [Pipeline] cleanWs 00:00:03.079 [WS-CLEANUP] Deleting project workspace... 00:00:03.079 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.085 [WS-CLEANUP] done 00:00:03.275 [Pipeline] setCustomBuildProperty 00:00:03.342 [Pipeline] httpRequest 00:00:03.663 [Pipeline] echo 00:00:03.665 Sorcerer 10.211.164.20 is alive 00:00:03.674 [Pipeline] retry 00:00:03.676 [Pipeline] { 00:00:03.692 [Pipeline] httpRequest 00:00:03.697 HttpMethod: GET 00:00:03.698 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.698 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.699 Response Code: HTTP/1.1 200 OK 00:00:03.700 Success: Status code 200 is in the accepted range: 200,404 00:00:03.700 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.853 [Pipeline] } 00:00:03.877 [Pipeline] // retry 00:00:03.883 [Pipeline] sh 00:00:04.163 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.181 [Pipeline] httpRequest 00:00:04.490 [Pipeline] echo 00:00:04.492 Sorcerer 10.211.164.20 is alive 00:00:04.500 [Pipeline] retry 00:00:04.502 [Pipeline] { 00:00:04.514 
[Pipeline] httpRequest 00:00:04.519 HttpMethod: GET 00:00:04.520 URL: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:04.521 Sending request to url: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:04.521 Response Code: HTTP/1.1 200 OK 00:00:04.522 Success: Status code 200 is in the accepted range: 200,404 00:00:04.523 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:32.169 [Pipeline] } 00:00:32.192 [Pipeline] // retry 00:00:32.201 [Pipeline] sh 00:00:32.493 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:35.043 [Pipeline] sh 00:00:35.329 + git -C spdk log --oneline -n5 00:00:35.329 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:00:35.329 5592070b3 doc: update nvmf_tracing.md 00:00:35.329 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:00:35.329 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:00:35.329 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT 00:00:35.351 [Pipeline] writeFile 00:00:35.369 [Pipeline] sh 00:00:35.661 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:35.675 [Pipeline] sh 00:00:35.961 + cat autorun-spdk.conf 00:00:35.961 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.961 SPDK_RUN_ASAN=1 00:00:35.961 SPDK_RUN_UBSAN=1 00:00:35.961 SPDK_TEST_RAID=1 00:00:35.961 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:35.969 RUN_NIGHTLY=1 00:00:35.971 [Pipeline] } 00:00:35.988 [Pipeline] // stage 00:00:36.004 [Pipeline] stage 00:00:36.006 [Pipeline] { (Run VM) 00:00:36.021 [Pipeline] sh 00:00:36.313 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:36.313 + echo 'Start stage prepare_nvme.sh' 00:00:36.313 Start stage prepare_nvme.sh 00:00:36.313 + [[ -n 5 ]] 00:00:36.313 + disk_prefix=ex5 00:00:36.313 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:36.313 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:36.313 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:36.313 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.313 ++ SPDK_RUN_ASAN=1 00:00:36.313 ++ SPDK_RUN_UBSAN=1 00:00:36.313 ++ SPDK_TEST_RAID=1 00:00:36.313 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:36.313 ++ RUN_NIGHTLY=1 00:00:36.313 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:36.313 + nvme_files=() 00:00:36.313 + declare -A nvme_files 00:00:36.313 + backend_dir=/var/lib/libvirt/images/backends 00:00:36.313 + nvme_files['nvme.img']=5G 00:00:36.313 + nvme_files['nvme-cmb.img']=5G 00:00:36.313 + nvme_files['nvme-multi0.img']=4G 00:00:36.313 + nvme_files['nvme-multi1.img']=4G 00:00:36.313 + nvme_files['nvme-multi2.img']=4G 00:00:36.313 + nvme_files['nvme-openstack.img']=8G 00:00:36.313 + nvme_files['nvme-zns.img']=5G 00:00:36.313 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:36.313 + (( SPDK_TEST_FTL == 1 )) 00:00:36.313 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:36.313 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:36.313 + for nvme in "${!nvme_files[@]}" 00:00:36.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:36.313 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.313 + for nvme in "${!nvme_files[@]}" 00:00:36.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:36.313 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.313 + for nvme in "${!nvme_files[@]}" 00:00:36.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:36.313 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:36.313 + for nvme in "${!nvme_files[@]}" 00:00:36.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:36.313 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.313 + for nvme in "${!nvme_files[@]}" 00:00:36.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:36.313 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.313 + for nvme in "${!nvme_files[@]}" 00:00:36.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:36.313 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.313 + for nvme in "${!nvme_files[@]}" 00:00:36.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:37.256 
Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:37.256 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:37.256 + echo 'End stage prepare_nvme.sh' 00:00:37.256 End stage prepare_nvme.sh 00:00:37.270 [Pipeline] sh 00:00:37.554 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:37.554 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:00:37.554 00:00:37.554 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:37.554 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:37.554 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:37.554 HELP=0 00:00:37.554 DRY_RUN=0 00:00:37.554 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:37.554 NVME_DISKS_TYPE=nvme,nvme, 00:00:37.554 NVME_AUTO_CREATE=0 00:00:37.554 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:37.554 NVME_CMB=,, 00:00:37.554 NVME_PMR=,, 00:00:37.554 NVME_ZNS=,, 00:00:37.554 NVME_MS=,, 00:00:37.554 NVME_FDP=,, 00:00:37.554 SPDK_VAGRANT_DISTRO=fedora39 00:00:37.554 SPDK_VAGRANT_VMCPU=10 00:00:37.554 SPDK_VAGRANT_VMRAM=12288 00:00:37.554 SPDK_VAGRANT_PROVIDER=libvirt 00:00:37.554 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:37.554 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:37.554 SPDK_OPENSTACK_NETWORK=0 00:00:37.554 VAGRANT_PACKAGE_BOX=0 00:00:37.554 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:37.554 
FORCE_DISTRO=true 00:00:37.554 VAGRANT_BOX_VERSION= 00:00:37.554 EXTRA_VAGRANTFILES= 00:00:37.554 NIC_MODEL=virtio 00:00:37.554 00:00:37.554 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:37.554 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:39.464 Bringing machine 'default' up with 'libvirt' provider... 00:00:40.033 ==> default: Creating image (snapshot of base box volume). 00:00:40.033 ==> default: Creating domain with the following settings... 00:00:40.033 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732655277_3355c396823cf8767ebc 00:00:40.033 ==> default: -- Domain type: kvm 00:00:40.033 ==> default: -- Cpus: 10 00:00:40.033 ==> default: -- Feature: acpi 00:00:40.033 ==> default: -- Feature: apic 00:00:40.033 ==> default: -- Feature: pae 00:00:40.033 ==> default: -- Memory: 12288M 00:00:40.033 ==> default: -- Memory Backing: hugepages: 00:00:40.033 ==> default: -- Management MAC: 00:00:40.033 ==> default: -- Loader: 00:00:40.033 ==> default: -- Nvram: 00:00:40.033 ==> default: -- Base box: spdk/fedora39 00:00:40.033 ==> default: -- Storage pool: default 00:00:40.033 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732655277_3355c396823cf8767ebc.img (20G) 00:00:40.033 ==> default: -- Volume Cache: default 00:00:40.033 ==> default: -- Kernel: 00:00:40.033 ==> default: -- Initrd: 00:00:40.033 ==> default: -- Graphics Type: vnc 00:00:40.033 ==> default: -- Graphics Port: -1 00:00:40.033 ==> default: -- Graphics IP: 127.0.0.1 00:00:40.033 ==> default: -- Graphics Password: Not defined 00:00:40.033 ==> default: -- Video Type: cirrus 00:00:40.033 ==> default: -- Video VRAM: 9216 00:00:40.033 ==> default: -- Sound Type: 00:00:40.033 ==> default: -- Keymap: en-us 00:00:40.033 ==> default: -- TPM Path: 00:00:40.033 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:40.033 ==> default: -- Command line args: 00:00:40.033 
==> default: -> value=-device, 00:00:40.033 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:40.033 ==> default: -> value=-drive, 00:00:40.033 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:40.033 ==> default: -> value=-device, 00:00:40.033 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.033 ==> default: -> value=-device, 00:00:40.033 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:40.033 ==> default: -> value=-drive, 00:00:40.033 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:40.033 ==> default: -> value=-device, 00:00:40.033 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.033 ==> default: -> value=-drive, 00:00:40.033 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:40.033 ==> default: -> value=-device, 00:00:40.033 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.033 ==> default: -> value=-drive, 00:00:40.033 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:40.033 ==> default: -> value=-device, 00:00:40.033 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.294 ==> default: Creating shared folders metadata... 00:00:40.294 ==> default: Starting domain. 00:00:42.219 ==> default: Waiting for domain to get an IP address... 00:00:57.159 ==> default: Waiting for SSH to become available... 00:00:58.540 ==> default: Configuring and enabling network interfaces... 
00:01:05.114 default: SSH address: 192.168.121.142:22 00:01:05.114 default: SSH username: vagrant 00:01:05.114 default: SSH auth method: private key 00:01:08.411 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:16.547 ==> default: Mounting SSHFS shared folder... 00:01:18.456 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:18.456 ==> default: Checking Mount.. 00:01:20.362 ==> default: Folder Successfully Mounted! 00:01:20.362 ==> default: Running provisioner: file... 00:01:21.301 default: ~/.gitconfig => .gitconfig 00:01:21.870 00:01:21.870 SUCCESS! 00:01:21.870 00:01:21.870 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:21.870 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:21.870 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:01:21.870 00:01:21.879 [Pipeline] } 00:01:21.891 [Pipeline] // stage 00:01:21.897 [Pipeline] dir 00:01:21.897 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:21.898 [Pipeline] { 00:01:21.906 [Pipeline] catchError 00:01:21.907 [Pipeline] { 00:01:21.917 [Pipeline] sh 00:01:22.199 + vagrant ssh-config --host vagrant 00:01:22.199 + sed -ne /^Host/,$p 00:01:22.199 + tee ssh_conf 00:01:24.735 Host vagrant 00:01:24.735 HostName 192.168.121.142 00:01:24.735 User vagrant 00:01:24.735 Port 22 00:01:24.735 UserKnownHostsFile /dev/null 00:01:24.735 StrictHostKeyChecking no 00:01:24.735 PasswordAuthentication no 00:01:24.735 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:24.735 IdentitiesOnly yes 00:01:24.735 LogLevel FATAL 00:01:24.735 ForwardAgent yes 00:01:24.735 ForwardX11 yes 00:01:24.735 00:01:24.750 [Pipeline] withEnv 00:01:24.753 [Pipeline] { 00:01:24.767 [Pipeline] sh 00:01:25.052 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:25.052 source /etc/os-release 00:01:25.052 [[ -e /image.version ]] && img=$(< /image.version) 00:01:25.052 # Minimal, systemd-like check. 00:01:25.052 if [[ -e /.dockerenv ]]; then 00:01:25.052 # Clear garbage from the node's name: 00:01:25.052 # agt-er_autotest_547-896 -> autotest_547-896 00:01:25.052 # $HOSTNAME is the actual container id 00:01:25.052 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:25.052 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:25.052 # We can assume this is a mount from a host where container is running, 00:01:25.052 # so fetch its hostname to easily identify the target swarm worker. 
00:01:25.052 container="$(< /etc/hostname) ($agent)" 00:01:25.052 else 00:01:25.052 # Fallback 00:01:25.052 container=$agent 00:01:25.052 fi 00:01:25.052 fi 00:01:25.052 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:25.052 00:01:25.328 [Pipeline] } 00:01:25.344 [Pipeline] // withEnv 00:01:25.353 [Pipeline] setCustomBuildProperty 00:01:25.368 [Pipeline] stage 00:01:25.370 [Pipeline] { (Tests) 00:01:25.387 [Pipeline] sh 00:01:25.669 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:25.944 [Pipeline] sh 00:01:26.227 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:26.502 [Pipeline] timeout 00:01:26.502 Timeout set to expire in 1 hr 30 min 00:01:26.504 [Pipeline] { 00:01:26.519 [Pipeline] sh 00:01:26.803 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:27.372 HEAD is now at 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:01:27.384 [Pipeline] sh 00:01:27.667 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:27.941 [Pipeline] sh 00:01:28.223 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:28.498 [Pipeline] sh 00:01:28.777 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:29.037 ++ readlink -f spdk_repo 00:01:29.037 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:29.037 + [[ -n /home/vagrant/spdk_repo ]] 00:01:29.037 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:29.037 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:29.037 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:29.037 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:29.037 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:29.037 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:29.037 + cd /home/vagrant/spdk_repo 00:01:29.037 + source /etc/os-release 00:01:29.037 ++ NAME='Fedora Linux' 00:01:29.037 ++ VERSION='39 (Cloud Edition)' 00:01:29.037 ++ ID=fedora 00:01:29.037 ++ VERSION_ID=39 00:01:29.037 ++ VERSION_CODENAME= 00:01:29.037 ++ PLATFORM_ID=platform:f39 00:01:29.037 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:29.037 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:29.037 ++ LOGO=fedora-logo-icon 00:01:29.037 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:29.037 ++ HOME_URL=https://fedoraproject.org/ 00:01:29.037 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:29.037 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:29.037 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:29.037 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:29.037 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:29.037 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:29.037 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:29.037 ++ SUPPORT_END=2024-11-12 00:01:29.037 ++ VARIANT='Cloud Edition' 00:01:29.037 ++ VARIANT_ID=cloud 00:01:29.037 + uname -a 00:01:29.038 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:29.038 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:29.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:29.605 Hugepages 00:01:29.605 node hugesize free / total 00:01:29.605 node0 1048576kB 0 / 0 00:01:29.605 node0 2048kB 0 / 0 00:01:29.605 00:01:29.605 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:29.605 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:29.605 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:29.605 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:29.605 + rm -f /tmp/spdk-ld-path 00:01:29.605 + source autorun-spdk.conf 00:01:29.605 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.605 ++ SPDK_RUN_ASAN=1 00:01:29.605 ++ SPDK_RUN_UBSAN=1 00:01:29.605 ++ SPDK_TEST_RAID=1 00:01:29.605 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.605 ++ RUN_NIGHTLY=1 00:01:29.605 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:29.605 + [[ -n '' ]] 00:01:29.605 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:29.605 + for M in /var/spdk/build-*-manifest.txt 00:01:29.605 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:29.605 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:29.605 + for M in /var/spdk/build-*-manifest.txt 00:01:29.605 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:29.605 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:29.605 + for M in /var/spdk/build-*-manifest.txt 00:01:29.605 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:29.605 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:29.605 ++ uname 00:01:29.605 + [[ Linux == \L\i\n\u\x ]] 00:01:29.605 + sudo dmesg -T 00:01:29.864 + sudo dmesg --clear 00:01:29.864 + dmesg_pid=5430 00:01:29.864 + [[ Fedora Linux == FreeBSD ]] 00:01:29.864 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.864 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.864 + sudo dmesg -Tw 00:01:29.864 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:29.864 + [[ -x /usr/src/fio-static/fio ]] 00:01:29.864 + export FIO_BIN=/usr/src/fio-static/fio 00:01:29.864 + FIO_BIN=/usr/src/fio-static/fio 00:01:29.864 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:29.864 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:29.865 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:29.865 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.865 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.865 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:29.865 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.865 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.865 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:29.865 21:08:47 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:29.865 21:08:47 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:29.865 21:08:47 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.865 21:08:47 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:29.865 21:08:47 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:29.865 21:08:47 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:29.865 21:08:47 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.865 21:08:47 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1 00:01:29.865 21:08:47 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:29.865 21:08:47 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:30.125 21:08:48 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:30.125 21:08:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:30.125 21:08:48 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:30.125 21:08:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:30.125 21:08:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:30.125 21:08:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:30.125 21:08:48 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.125 21:08:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.125 21:08:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.125 21:08:48 -- paths/export.sh@5 -- $ export PATH 00:01:30.125 21:08:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.125 21:08:48 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:30.125 21:08:48 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:30.125 21:08:48 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732655328.XXXXXX 00:01:30.125 21:08:48 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732655328.mmSjwm 00:01:30.125 21:08:48 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:30.125 21:08:48 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:30.125 21:08:48 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:30.125 21:08:48 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:30.125 21:08:48 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:30.125 21:08:48 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:30.125 21:08:48 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:30.125 21:08:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.125 21:08:48 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:30.125 21:08:48 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:30.125 21:08:48 -- pm/common@17 -- $ local monitor 00:01:30.125 21:08:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:30.125 21:08:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:30.125 21:08:48 -- pm/common@25 -- $ sleep 1 00:01:30.125 21:08:48 -- pm/common@21 -- $ date +%s 00:01:30.125 21:08:48 -- pm/common@21 -- $ date +%s 00:01:30.126 
21:08:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732655328 00:01:30.126 21:08:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732655328 00:01:30.126 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732655328_collect-vmstat.pm.log 00:01:30.126 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732655328_collect-cpu-load.pm.log 00:01:31.064 21:08:49 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:31.064 21:08:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:31.064 21:08:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:31.064 21:08:49 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:31.064 21:08:49 -- spdk/autobuild.sh@16 -- $ date -u 00:01:31.064 Tue Nov 26 09:08:49 PM UTC 2024 00:01:31.064 21:08:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:31.064 v25.01-pre-271-g2f2acf4eb 00:01:31.064 21:08:49 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:31.064 21:08:49 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:31.064 21:08:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:31.064 21:08:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:31.064 21:08:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.064 ************************************ 00:01:31.064 START TEST asan 00:01:31.064 ************************************ 00:01:31.064 using asan 00:01:31.064 21:08:49 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:31.064 00:01:31.064 real 0m0.000s 00:01:31.064 user 0m0.000s 00:01:31.064 sys 0m0.000s 00:01:31.064 21:08:49 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:31.064 21:08:49 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:31.064 ************************************ 00:01:31.064 END TEST asan 00:01:31.064 ************************************ 00:01:31.064 21:08:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:31.064 21:08:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:31.064 21:08:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:31.064 21:08:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:31.064 21:08:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.064 ************************************ 00:01:31.064 START TEST ubsan 00:01:31.064 ************************************ 00:01:31.064 using ubsan 00:01:31.064 21:08:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:31.064 00:01:31.064 real 0m0.000s 00:01:31.064 user 0m0.000s 00:01:31.064 sys 0m0.000s 00:01:31.064 21:08:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:31.064 21:08:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:31.064 ************************************ 00:01:31.064 END TEST ubsan 00:01:31.064 ************************************ 00:01:31.323 21:08:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:31.323 21:08:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:31.323 21:08:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:31.323 21:08:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:31.323 21:08:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:31.323 21:08:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:31.323 21:08:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:31.323 21:08:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:31.323 21:08:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:31.323 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:31.323 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:31.892 Using 'verbs' RDMA provider 00:01:47.738 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:02.634 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:03.492 Creating mk/config.mk...done. 00:02:03.492 Creating mk/cc.flags.mk...done. 00:02:03.492 Type 'make' to build. 00:02:03.492 21:09:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:03.492 21:09:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:03.492 21:09:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:03.492 21:09:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.492 ************************************ 00:02:03.492 START TEST make 00:02:03.492 ************************************ 00:02:03.492 21:09:21 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:04.063 make[1]: Nothing to be done for 'all'. 
00:02:14.052 The Meson build system 00:02:14.052 Version: 1.5.0 00:02:14.052 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:14.052 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:14.052 Build type: native build 00:02:14.052 Program cat found: YES (/usr/bin/cat) 00:02:14.052 Project name: DPDK 00:02:14.052 Project version: 24.03.0 00:02:14.052 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:14.052 C linker for the host machine: cc ld.bfd 2.40-14 00:02:14.052 Host machine cpu family: x86_64 00:02:14.052 Host machine cpu: x86_64 00:02:14.052 Message: ## Building in Developer Mode ## 00:02:14.052 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.052 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:14.052 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.052 Program python3 found: YES (/usr/bin/python3) 00:02:14.052 Program cat found: YES (/usr/bin/cat) 00:02:14.052 Compiler for C supports arguments -march=native: YES 00:02:14.052 Checking for size of "void *" : 8 00:02:14.052 Checking for size of "void *" : 8 (cached) 00:02:14.052 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:14.052 Library m found: YES 00:02:14.052 Library numa found: YES 00:02:14.052 Has header "numaif.h" : YES 00:02:14.052 Library fdt found: NO 00:02:14.052 Library execinfo found: NO 00:02:14.052 Has header "execinfo.h" : YES 00:02:14.052 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.052 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.052 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.052 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.052 Run-time dependency openssl found: YES 3.1.1 00:02:14.052 Run-time dependency libpcap found: YES 1.10.4 00:02:14.052 Has header "pcap.h" with dependency 
libpcap: YES 00:02:14.052 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.052 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.052 Compiler for C supports arguments -Wformat: YES 00:02:14.052 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.052 Compiler for C supports arguments -Wformat-security: NO 00:02:14.052 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.052 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.052 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.052 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.052 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.052 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.052 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.052 Compiler for C supports arguments -Wundef: YES 00:02:14.052 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.052 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.052 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.052 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.052 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.052 Program objdump found: YES (/usr/bin/objdump) 00:02:14.052 Compiler for C supports arguments -mavx512f: YES 00:02:14.052 Checking if "AVX512 checking" compiles: YES 00:02:14.052 Fetching value of define "__SSE4_2__" : 1 00:02:14.052 Fetching value of define "__AES__" : 1 00:02:14.052 Fetching value of define "__AVX__" : 1 00:02:14.052 Fetching value of define "__AVX2__" : 1 00:02:14.052 Fetching value of define "__AVX512BW__" : 1 00:02:14.052 Fetching value of define "__AVX512CD__" : 1 00:02:14.052 Fetching value of define "__AVX512DQ__" : 1 00:02:14.052 Fetching value of define "__AVX512F__" : 1 00:02:14.052 Fetching value of define "__AVX512VL__" : 1 00:02:14.052 Fetching value of define 
"__PCLMUL__" : 1 00:02:14.052 Fetching value of define "__RDRND__" : 1 00:02:14.052 Fetching value of define "__RDSEED__" : 1 00:02:14.052 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.052 Fetching value of define "__znver1__" : (undefined) 00:02:14.052 Fetching value of define "__znver2__" : (undefined) 00:02:14.052 Fetching value of define "__znver3__" : (undefined) 00:02:14.052 Fetching value of define "__znver4__" : (undefined) 00:02:14.052 Library asan found: YES 00:02:14.052 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.052 Message: lib/log: Defining dependency "log" 00:02:14.052 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.052 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.052 Library rt found: YES 00:02:14.052 Checking for function "getentropy" : NO 00:02:14.052 Message: lib/eal: Defining dependency "eal" 00:02:14.052 Message: lib/ring: Defining dependency "ring" 00:02:14.052 Message: lib/rcu: Defining dependency "rcu" 00:02:14.052 Message: lib/mempool: Defining dependency "mempool" 00:02:14.052 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.052 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.052 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.052 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.052 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:14.052 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:14.052 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:14.052 Compiler for C supports arguments -mpclmul: YES 00:02:14.052 Compiler for C supports arguments -maes: YES 00:02:14.053 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.053 Compiler for C supports arguments -mavx512bw: YES 00:02:14.053 Compiler for C supports arguments -mavx512dq: YES 00:02:14.053 Compiler for C supports arguments -mavx512vl: YES 00:02:14.053 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:14.053 Compiler for C supports arguments -mavx2: YES 00:02:14.053 Compiler for C supports arguments -mavx: YES 00:02:14.053 Message: lib/net: Defining dependency "net" 00:02:14.053 Message: lib/meter: Defining dependency "meter" 00:02:14.053 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.053 Message: lib/pci: Defining dependency "pci" 00:02:14.053 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.053 Message: lib/hash: Defining dependency "hash" 00:02:14.053 Message: lib/timer: Defining dependency "timer" 00:02:14.053 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.053 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.053 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.053 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.053 Message: lib/power: Defining dependency "power" 00:02:14.053 Message: lib/reorder: Defining dependency "reorder" 00:02:14.053 Message: lib/security: Defining dependency "security" 00:02:14.053 Has header "linux/userfaultfd.h" : YES 00:02:14.053 Has header "linux/vduse.h" : YES 00:02:14.053 Message: lib/vhost: Defining dependency "vhost" 00:02:14.053 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:14.053 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:14.053 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:14.053 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:14.053 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:14.053 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:14.053 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:14.053 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:14.053 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:14.053 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:14.053 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:14.053 Configuring doxy-api-html.conf using configuration 00:02:14.053 Configuring doxy-api-man.conf using configuration 00:02:14.053 Program mandb found: YES (/usr/bin/mandb) 00:02:14.053 Program sphinx-build found: NO 00:02:14.053 Configuring rte_build_config.h using configuration 00:02:14.053 Message: 00:02:14.053 ================= 00:02:14.053 Applications Enabled 00:02:14.053 ================= 00:02:14.053 00:02:14.053 apps: 00:02:14.053 00:02:14.053 00:02:14.053 Message: 00:02:14.053 ================= 00:02:14.053 Libraries Enabled 00:02:14.053 ================= 00:02:14.053 00:02:14.053 libs: 00:02:14.053 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:14.053 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:14.053 cryptodev, dmadev, power, reorder, security, vhost, 00:02:14.053 00:02:14.053 Message: 00:02:14.053 =============== 00:02:14.053 Drivers Enabled 00:02:14.053 =============== 00:02:14.053 00:02:14.053 common: 00:02:14.053 00:02:14.053 bus: 00:02:14.053 pci, vdev, 00:02:14.053 mempool: 00:02:14.053 ring, 00:02:14.053 dma: 00:02:14.053 00:02:14.053 net: 00:02:14.053 00:02:14.053 crypto: 00:02:14.053 00:02:14.053 compress: 00:02:14.053 00:02:14.053 vdpa: 00:02:14.053 00:02:14.053 00:02:14.053 Message: 00:02:14.053 ================= 00:02:14.053 Content Skipped 00:02:14.053 ================= 00:02:14.053 00:02:14.053 apps: 00:02:14.053 dumpcap: explicitly disabled via build config 00:02:14.053 graph: explicitly disabled via build config 00:02:14.053 pdump: explicitly disabled via build config 00:02:14.053 proc-info: explicitly disabled via build config 00:02:14.053 test-acl: explicitly disabled via build config 00:02:14.053 test-bbdev: explicitly disabled via build config 00:02:14.053 test-cmdline: explicitly disabled via build config 00:02:14.053 test-compress-perf: explicitly disabled via build config 00:02:14.053 test-crypto-perf: explicitly disabled via build 
config 00:02:14.053 test-dma-perf: explicitly disabled via build config 00:02:14.053 test-eventdev: explicitly disabled via build config 00:02:14.053 test-fib: explicitly disabled via build config 00:02:14.053 test-flow-perf: explicitly disabled via build config 00:02:14.053 test-gpudev: explicitly disabled via build config 00:02:14.053 test-mldev: explicitly disabled via build config 00:02:14.053 test-pipeline: explicitly disabled via build config 00:02:14.053 test-pmd: explicitly disabled via build config 00:02:14.053 test-regex: explicitly disabled via build config 00:02:14.053 test-sad: explicitly disabled via build config 00:02:14.053 test-security-perf: explicitly disabled via build config 00:02:14.053 00:02:14.053 libs: 00:02:14.053 argparse: explicitly disabled via build config 00:02:14.053 metrics: explicitly disabled via build config 00:02:14.053 acl: explicitly disabled via build config 00:02:14.053 bbdev: explicitly disabled via build config 00:02:14.053 bitratestats: explicitly disabled via build config 00:02:14.053 bpf: explicitly disabled via build config 00:02:14.053 cfgfile: explicitly disabled via build config 00:02:14.053 distributor: explicitly disabled via build config 00:02:14.053 efd: explicitly disabled via build config 00:02:14.053 eventdev: explicitly disabled via build config 00:02:14.053 dispatcher: explicitly disabled via build config 00:02:14.053 gpudev: explicitly disabled via build config 00:02:14.053 gro: explicitly disabled via build config 00:02:14.053 gso: explicitly disabled via build config 00:02:14.053 ip_frag: explicitly disabled via build config 00:02:14.053 jobstats: explicitly disabled via build config 00:02:14.053 latencystats: explicitly disabled via build config 00:02:14.053 lpm: explicitly disabled via build config 00:02:14.053 member: explicitly disabled via build config 00:02:14.053 pcapng: explicitly disabled via build config 00:02:14.053 rawdev: explicitly disabled via build config 00:02:14.053 regexdev: explicitly 
disabled via build config 00:02:14.053 mldev: explicitly disabled via build config 00:02:14.053 rib: explicitly disabled via build config 00:02:14.053 sched: explicitly disabled via build config 00:02:14.053 stack: explicitly disabled via build config 00:02:14.053 ipsec: explicitly disabled via build config 00:02:14.053 pdcp: explicitly disabled via build config 00:02:14.053 fib: explicitly disabled via build config 00:02:14.053 port: explicitly disabled via build config 00:02:14.053 pdump: explicitly disabled via build config 00:02:14.053 table: explicitly disabled via build config 00:02:14.053 pipeline: explicitly disabled via build config 00:02:14.053 graph: explicitly disabled via build config 00:02:14.053 node: explicitly disabled via build config 00:02:14.053 00:02:14.053 drivers: 00:02:14.053 common/cpt: not in enabled drivers build config 00:02:14.053 common/dpaax: not in enabled drivers build config 00:02:14.053 common/iavf: not in enabled drivers build config 00:02:14.053 common/idpf: not in enabled drivers build config 00:02:14.053 common/ionic: not in enabled drivers build config 00:02:14.053 common/mvep: not in enabled drivers build config 00:02:14.053 common/octeontx: not in enabled drivers build config 00:02:14.053 bus/auxiliary: not in enabled drivers build config 00:02:14.053 bus/cdx: not in enabled drivers build config 00:02:14.053 bus/dpaa: not in enabled drivers build config 00:02:14.053 bus/fslmc: not in enabled drivers build config 00:02:14.053 bus/ifpga: not in enabled drivers build config 00:02:14.053 bus/platform: not in enabled drivers build config 00:02:14.053 bus/uacce: not in enabled drivers build config 00:02:14.053 bus/vmbus: not in enabled drivers build config 00:02:14.053 common/cnxk: not in enabled drivers build config 00:02:14.053 common/mlx5: not in enabled drivers build config 00:02:14.053 common/nfp: not in enabled drivers build config 00:02:14.053 common/nitrox: not in enabled drivers build config 00:02:14.053 common/qat: not 
in enabled drivers build config 00:02:14.053 common/sfc_efx: not in enabled drivers build config 00:02:14.053 mempool/bucket: not in enabled drivers build config 00:02:14.053 mempool/cnxk: not in enabled drivers build config 00:02:14.053 mempool/dpaa: not in enabled drivers build config 00:02:14.053 mempool/dpaa2: not in enabled drivers build config 00:02:14.053 mempool/octeontx: not in enabled drivers build config 00:02:14.053 mempool/stack: not in enabled drivers build config 00:02:14.053 dma/cnxk: not in enabled drivers build config 00:02:14.053 dma/dpaa: not in enabled drivers build config 00:02:14.053 dma/dpaa2: not in enabled drivers build config 00:02:14.053 dma/hisilicon: not in enabled drivers build config 00:02:14.053 dma/idxd: not in enabled drivers build config 00:02:14.053 dma/ioat: not in enabled drivers build config 00:02:14.053 dma/skeleton: not in enabled drivers build config 00:02:14.053 net/af_packet: not in enabled drivers build config 00:02:14.053 net/af_xdp: not in enabled drivers build config 00:02:14.053 net/ark: not in enabled drivers build config 00:02:14.053 net/atlantic: not in enabled drivers build config 00:02:14.053 net/avp: not in enabled drivers build config 00:02:14.053 net/axgbe: not in enabled drivers build config 00:02:14.053 net/bnx2x: not in enabled drivers build config 00:02:14.053 net/bnxt: not in enabled drivers build config 00:02:14.053 net/bonding: not in enabled drivers build config 00:02:14.053 net/cnxk: not in enabled drivers build config 00:02:14.053 net/cpfl: not in enabled drivers build config 00:02:14.053 net/cxgbe: not in enabled drivers build config 00:02:14.053 net/dpaa: not in enabled drivers build config 00:02:14.053 net/dpaa2: not in enabled drivers build config 00:02:14.053 net/e1000: not in enabled drivers build config 00:02:14.053 net/ena: not in enabled drivers build config 00:02:14.053 net/enetc: not in enabled drivers build config 00:02:14.053 net/enetfec: not in enabled drivers build config 
00:02:14.053 net/enic: not in enabled drivers build config 00:02:14.053 net/failsafe: not in enabled drivers build config 00:02:14.053 net/fm10k: not in enabled drivers build config 00:02:14.053 net/gve: not in enabled drivers build config 00:02:14.053 net/hinic: not in enabled drivers build config 00:02:14.053 net/hns3: not in enabled drivers build config 00:02:14.053 net/i40e: not in enabled drivers build config 00:02:14.053 net/iavf: not in enabled drivers build config 00:02:14.053 net/ice: not in enabled drivers build config 00:02:14.053 net/idpf: not in enabled drivers build config 00:02:14.053 net/igc: not in enabled drivers build config 00:02:14.053 net/ionic: not in enabled drivers build config 00:02:14.053 net/ipn3ke: not in enabled drivers build config 00:02:14.053 net/ixgbe: not in enabled drivers build config 00:02:14.053 net/mana: not in enabled drivers build config 00:02:14.053 net/memif: not in enabled drivers build config 00:02:14.053 net/mlx4: not in enabled drivers build config 00:02:14.053 net/mlx5: not in enabled drivers build config 00:02:14.053 net/mvneta: not in enabled drivers build config 00:02:14.053 net/mvpp2: not in enabled drivers build config 00:02:14.053 net/netvsc: not in enabled drivers build config 00:02:14.053 net/nfb: not in enabled drivers build config 00:02:14.053 net/nfp: not in enabled drivers build config 00:02:14.053 net/ngbe: not in enabled drivers build config 00:02:14.053 net/null: not in enabled drivers build config 00:02:14.053 net/octeontx: not in enabled drivers build config 00:02:14.053 net/octeon_ep: not in enabled drivers build config 00:02:14.053 net/pcap: not in enabled drivers build config 00:02:14.053 net/pfe: not in enabled drivers build config 00:02:14.053 net/qede: not in enabled drivers build config 00:02:14.053 net/ring: not in enabled drivers build config 00:02:14.053 net/sfc: not in enabled drivers build config 00:02:14.053 net/softnic: not in enabled drivers build config 00:02:14.053 net/tap: not in 
enabled drivers build config 00:02:14.053 net/thunderx: not in enabled drivers build config 00:02:14.053 net/txgbe: not in enabled drivers build config 00:02:14.053 net/vdev_netvsc: not in enabled drivers build config 00:02:14.053 net/vhost: not in enabled drivers build config 00:02:14.053 net/virtio: not in enabled drivers build config 00:02:14.053 net/vmxnet3: not in enabled drivers build config 00:02:14.053 raw/*: missing internal dependency, "rawdev" 00:02:14.053 crypto/armv8: not in enabled drivers build config 00:02:14.053 crypto/bcmfs: not in enabled drivers build config 00:02:14.053 crypto/caam_jr: not in enabled drivers build config 00:02:14.053 crypto/ccp: not in enabled drivers build config 00:02:14.053 crypto/cnxk: not in enabled drivers build config 00:02:14.053 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.053 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.053 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.053 crypto/mlx5: not in enabled drivers build config 00:02:14.053 crypto/mvsam: not in enabled drivers build config 00:02:14.053 crypto/nitrox: not in enabled drivers build config 00:02:14.053 crypto/null: not in enabled drivers build config 00:02:14.053 crypto/octeontx: not in enabled drivers build config 00:02:14.053 crypto/openssl: not in enabled drivers build config 00:02:14.053 crypto/scheduler: not in enabled drivers build config 00:02:14.053 crypto/uadk: not in enabled drivers build config 00:02:14.053 crypto/virtio: not in enabled drivers build config 00:02:14.053 compress/isal: not in enabled drivers build config 00:02:14.053 compress/mlx5: not in enabled drivers build config 00:02:14.053 compress/nitrox: not in enabled drivers build config 00:02:14.053 compress/octeontx: not in enabled drivers build config 00:02:14.053 compress/zlib: not in enabled drivers build config 00:02:14.053 regex/*: missing internal dependency, "regexdev" 00:02:14.053 ml/*: missing internal dependency, "mldev" 
00:02:14.053 vdpa/ifc: not in enabled drivers build config 00:02:14.053 vdpa/mlx5: not in enabled drivers build config 00:02:14.053 vdpa/nfp: not in enabled drivers build config 00:02:14.053 vdpa/sfc: not in enabled drivers build config 00:02:14.053 event/*: missing internal dependency, "eventdev" 00:02:14.053 baseband/*: missing internal dependency, "bbdev" 00:02:14.053 gpu/*: missing internal dependency, "gpudev" 00:02:14.053 00:02:14.053 00:02:14.324 Build targets in project: 85 00:02:14.324 00:02:14.324 DPDK 24.03.0 00:02:14.324 00:02:14.324 User defined options 00:02:14.324 buildtype : debug 00:02:14.324 default_library : shared 00:02:14.324 libdir : lib 00:02:14.324 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:14.324 b_sanitize : address 00:02:14.324 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:14.324 c_link_args : 00:02:14.324 cpu_instruction_set: native 00:02:14.324 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:14.324 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:14.324 enable_docs : false 00:02:14.324 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:14.324 enable_kmods : false 00:02:14.324 max_lcores : 128 00:02:14.324 tests : false 00:02:14.324 00:02:14.324 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:14.594 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:14.855 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:02:14.855 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:14.855 [3/268] Linking static target lib/librte_kvargs.a 00:02:14.855 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:14.855 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:14.855 [6/268] Linking static target lib/librte_log.a 00:02:15.115 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:15.375 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:15.376 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.376 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.376 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.376 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.376 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.376 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.635 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.635 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.635 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.635 [18/268] Linking static target lib/librte_telemetry.a 00:02:15.896 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.896 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.896 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:15.896 [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.896 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 
00:02:15.896 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:15.896 [25/268] Linking target lib/librte_log.so.24.1 00:02:16.156 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:16.156 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.156 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:16.156 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:16.156 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:16.156 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:16.415 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:16.415 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:16.415 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.415 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.415 [36/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.415 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:16.674 [38/268] Linking target lib/librte_telemetry.so.24.1 00:02:16.674 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.674 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:16.674 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:16.674 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:16.674 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:16.674 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:16.674 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:16.933 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:16.933 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.933 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:16.933 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:16.933 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:17.194 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:17.194 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:17.194 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:17.454 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:17.454 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:17.454 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:17.454 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:17.454 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:17.454 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:17.714 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:17.714 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:17.714 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:17.714 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:17.714 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:17.974 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:17.974 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:17.974 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:02:18.234 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:18.234 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:18.234 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:18.234 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:18.234 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:18.234 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:18.234 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:18.234 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:18.234 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:18.494 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:18.494 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:18.494 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:18.494 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:18.754 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:18.754 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:18.754 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:18.754 [84/268] Linking static target lib/librte_ring.a 00:02:18.754 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:18.754 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:18.754 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:18.754 [88/268] Linking static target lib/librte_eal.a 00:02:19.015 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:19.015 [90/268] Linking static target lib/librte_rcu.a 00:02:19.015 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 
00:02:19.015 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:19.015 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:19.015 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:19.015 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:19.275 [96/268] Linking static target lib/librte_mempool.a 00:02:19.275 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.275 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:19.275 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:19.275 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:19.535 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.535 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.535 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:19.535 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:19.795 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:19.795 [106/268] Linking static target lib/librte_mbuf.a 00:02:19.795 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:19.795 [108/268] Linking static target lib/librte_meter.a 00:02:19.795 [109/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:19.795 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:19.795 [111/268] Linking static target lib/librte_net.a 00:02:20.055 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:20.055 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:20.055 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:20.055 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:20.389 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.389 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:20.389 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.389 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.648 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:20.648 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:20.648 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.648 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:20.907 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:20.907 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:21.165 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:21.165 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:21.165 [128/268] Linking static target lib/librte_pci.a 00:02:21.165 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:21.165 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.165 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:21.165 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.423 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:21.423 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:21.423 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:21.423 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:21.423 [137/268] Generating lib/pci.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:21.423 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:21.423 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:21.423 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:21.423 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:21.423 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:21.423 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:21.682 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:21.682 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:21.682 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:21.682 [147/268] Linking static target lib/librte_cmdline.a 00:02:21.682 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:21.942 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:21.942 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:21.942 [151/268] Linking static target lib/librte_timer.a 00:02:21.942 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:22.201 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.201 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.460 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.460 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:22.719 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.719 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:22.719 [159/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.719 [160/268] Linking static target lib/librte_compressdev.a 00:02:22.719 [161/268] Linking static target lib/librte_hash.a 00:02:22.719 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:22.719 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:22.978 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:22.978 [165/268] Linking static target lib/librte_ethdev.a 00:02:22.978 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:22.978 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:22.978 [168/268] Linking static target lib/librte_dmadev.a 00:02:22.978 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:23.238 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:23.238 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:23.238 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.238 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:23.498 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:23.498 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:23.498 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.758 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:23.758 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:23.758 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.758 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:23.758 [181/268] Generating lib/hash.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:23.758 [182/268] Linking static target lib/librte_cryptodev.a 00:02:23.758 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:23.758 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.017 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:24.017 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.017 [187/268] Linking static target lib/librte_power.a 00:02:24.277 [188/268] Linking static target lib/librte_reorder.a 00:02:24.277 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:24.277 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.277 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.277 [192/268] Linking static target lib/librte_security.a 00:02:24.277 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:24.846 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.846 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:24.846 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.105 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.106 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:25.106 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:25.106 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:25.367 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:25.367 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:25.626 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:25.626 [204/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:25.626 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:25.626 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:25.886 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:25.886 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:25.886 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:25.886 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:25.886 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.886 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:26.146 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.146 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.146 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:26.146 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:26.146 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:26.146 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:26.146 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.146 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.146 [221/268] Linking static target drivers/librte_bus_vdev.a 00:02:26.426 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:26.426 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:26.426 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:26.426 
[225/268] Linking static target drivers/librte_mempool_ring.a 00:02:26.426 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.704 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.274 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:29.183 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.183 [230/268] Linking target lib/librte_eal.so.24.1 00:02:29.443 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:29.443 [232/268] Linking target lib/librte_pci.so.24.1 00:02:29.443 [233/268] Linking target lib/librte_meter.so.24.1 00:02:29.443 [234/268] Linking target lib/librte_ring.so.24.1 00:02:29.443 [235/268] Linking target lib/librte_timer.so.24.1 00:02:29.443 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:29.443 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:29.443 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:29.704 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:29.704 [240/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:29.704 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:29.704 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:29.704 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:29.704 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:29.704 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:29.704 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:29.704 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:29.963 [248/268] Linking target 
drivers/librte_mempool_ring.so.24.1 00:02:29.963 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:29.963 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:29.963 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:29.963 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:29.963 [253/268] Linking target lib/librte_net.so.24.1 00:02:29.963 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:30.223 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:30.223 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:30.223 [257/268] Linking target lib/librte_hash.so.24.1 00:02:30.223 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:30.223 [259/268] Linking target lib/librte_security.so.24.1 00:02:30.223 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:30.793 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:30.793 [262/268] Linking static target lib/librte_vhost.a 00:02:31.361 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.361 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:31.361 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:31.622 [266/268] Linking target lib/librte_power.so.24.1 00:02:33.534 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.534 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:33.534 INFO: autodetecting backend as ninja 00:02:33.534 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:51.678 CC lib/log/log.o 00:02:51.678 CC lib/log/log_flags.o 00:02:51.678 CC lib/log/log_deprecated.o 00:02:51.678 CC lib/ut/ut.o 00:02:51.678 CC lib/ut_mock/mock.o 00:02:51.678 LIB 
libspdk_log.a 00:02:51.678 LIB libspdk_ut.a 00:02:51.678 LIB libspdk_ut_mock.a 00:02:51.678 SO libspdk_log.so.7.1 00:02:51.678 SO libspdk_ut.so.2.0 00:02:51.678 SO libspdk_ut_mock.so.6.0 00:02:51.678 SYMLINK libspdk_ut_mock.so 00:02:51.678 SYMLINK libspdk_ut.so 00:02:51.678 SYMLINK libspdk_log.so 00:02:51.678 CC lib/util/base64.o 00:02:51.678 CC lib/util/bit_array.o 00:02:51.678 CC lib/util/crc32c.o 00:02:51.678 CC lib/util/cpuset.o 00:02:51.678 CC lib/util/crc32.o 00:02:51.678 CC lib/util/crc16.o 00:02:51.678 CC lib/ioat/ioat.o 00:02:51.678 CC lib/dma/dma.o 00:02:51.678 CXX lib/trace_parser/trace.o 00:02:51.678 CC lib/vfio_user/host/vfio_user_pci.o 00:02:51.678 CC lib/util/crc32_ieee.o 00:02:51.678 CC lib/util/crc64.o 00:02:51.678 CC lib/util/dif.o 00:02:51.678 CC lib/util/fd.o 00:02:51.678 CC lib/vfio_user/host/vfio_user.o 00:02:51.678 CC lib/util/fd_group.o 00:02:51.678 LIB libspdk_dma.a 00:02:51.678 CC lib/util/file.o 00:02:51.678 SO libspdk_dma.so.5.0 00:02:51.678 CC lib/util/hexlify.o 00:02:51.678 CC lib/util/iov.o 00:02:51.678 LIB libspdk_ioat.a 00:02:51.678 SYMLINK libspdk_dma.so 00:02:51.678 CC lib/util/math.o 00:02:51.678 SO libspdk_ioat.so.7.0 00:02:51.678 CC lib/util/net.o 00:02:51.678 CC lib/util/pipe.o 00:02:51.678 SYMLINK libspdk_ioat.so 00:02:51.678 CC lib/util/strerror_tls.o 00:02:51.678 LIB libspdk_vfio_user.a 00:02:51.678 CC lib/util/string.o 00:02:51.678 SO libspdk_vfio_user.so.5.0 00:02:51.678 CC lib/util/uuid.o 00:02:51.678 CC lib/util/xor.o 00:02:51.678 CC lib/util/zipf.o 00:02:51.678 SYMLINK libspdk_vfio_user.so 00:02:51.678 CC lib/util/md5.o 00:02:51.678 LIB libspdk_util.a 00:02:51.678 SO libspdk_util.so.10.1 00:02:51.678 LIB libspdk_trace_parser.a 00:02:51.678 SYMLINK libspdk_util.so 00:02:51.678 SO libspdk_trace_parser.so.6.0 00:02:51.678 SYMLINK libspdk_trace_parser.so 00:02:51.678 CC lib/rdma_utils/rdma_utils.o 00:02:51.678 CC lib/env_dpdk/env.o 00:02:51.678 CC lib/env_dpdk/pci.o 00:02:51.678 CC lib/env_dpdk/memory.o 00:02:51.678 CC 
lib/conf/conf.o 00:02:51.678 CC lib/env_dpdk/init.o 00:02:51.678 CC lib/env_dpdk/threads.o 00:02:51.678 CC lib/idxd/idxd.o 00:02:51.678 CC lib/vmd/vmd.o 00:02:51.678 CC lib/json/json_parse.o 00:02:51.678 CC lib/idxd/idxd_user.o 00:02:51.678 LIB libspdk_conf.a 00:02:51.678 SO libspdk_conf.so.6.0 00:02:51.678 LIB libspdk_rdma_utils.a 00:02:51.678 CC lib/json/json_util.o 00:02:51.678 SO libspdk_rdma_utils.so.1.0 00:02:51.678 SYMLINK libspdk_conf.so 00:02:51.678 CC lib/json/json_write.o 00:02:51.678 SYMLINK libspdk_rdma_utils.so 00:02:51.678 CC lib/vmd/led.o 00:02:51.678 CC lib/env_dpdk/pci_ioat.o 00:02:51.678 CC lib/env_dpdk/pci_virtio.o 00:02:51.678 CC lib/env_dpdk/pci_vmd.o 00:02:51.678 CC lib/env_dpdk/pci_idxd.o 00:02:51.678 CC lib/env_dpdk/pci_event.o 00:02:51.678 CC lib/env_dpdk/sigbus_handler.o 00:02:51.678 CC lib/env_dpdk/pci_dpdk.o 00:02:51.679 LIB libspdk_json.a 00:02:51.679 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:51.679 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:51.679 SO libspdk_json.so.6.0 00:02:51.679 CC lib/idxd/idxd_kernel.o 00:02:51.679 SYMLINK libspdk_json.so 00:02:51.679 LIB libspdk_idxd.a 00:02:51.679 LIB libspdk_vmd.a 00:02:51.679 SO libspdk_idxd.so.12.1 00:02:51.679 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:51.679 CC lib/rdma_provider/common.o 00:02:51.679 SO libspdk_vmd.so.6.0 00:02:51.679 CC lib/jsonrpc/jsonrpc_server.o 00:02:51.679 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:51.679 CC lib/jsonrpc/jsonrpc_client.o 00:02:51.679 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:51.679 SYMLINK libspdk_vmd.so 00:02:51.679 SYMLINK libspdk_idxd.so 00:02:51.679 LIB libspdk_rdma_provider.a 00:02:51.679 SO libspdk_rdma_provider.so.7.0 00:02:51.679 SYMLINK libspdk_rdma_provider.so 00:02:51.679 LIB libspdk_jsonrpc.a 00:02:51.679 SO libspdk_jsonrpc.so.6.0 00:02:51.939 SYMLINK libspdk_jsonrpc.so 00:02:52.199 LIB libspdk_env_dpdk.a 00:02:52.199 CC lib/rpc/rpc.o 00:02:52.199 SO libspdk_env_dpdk.so.15.1 00:02:52.460 SYMLINK libspdk_env_dpdk.so 00:02:52.460 LIB 
libspdk_rpc.a 00:02:52.460 SO libspdk_rpc.so.6.0 00:02:52.460 SYMLINK libspdk_rpc.so 00:02:53.031 CC lib/trace/trace_flags.o 00:02:53.031 CC lib/trace/trace.o 00:02:53.031 CC lib/trace/trace_rpc.o 00:02:53.031 CC lib/notify/notify.o 00:02:53.031 CC lib/notify/notify_rpc.o 00:02:53.031 CC lib/keyring/keyring.o 00:02:53.031 CC lib/keyring/keyring_rpc.o 00:02:53.031 LIB libspdk_notify.a 00:02:53.031 SO libspdk_notify.so.6.0 00:02:53.031 LIB libspdk_trace.a 00:02:53.291 LIB libspdk_keyring.a 00:02:53.291 SYMLINK libspdk_notify.so 00:02:53.291 SO libspdk_trace.so.11.0 00:02:53.292 SO libspdk_keyring.so.2.0 00:02:53.292 SYMLINK libspdk_trace.so 00:02:53.292 SYMLINK libspdk_keyring.so 00:02:53.551 CC lib/thread/iobuf.o 00:02:53.551 CC lib/thread/thread.o 00:02:53.551 CC lib/sock/sock.o 00:02:53.551 CC lib/sock/sock_rpc.o 00:02:54.120 LIB libspdk_sock.a 00:02:54.120 SO libspdk_sock.so.10.0 00:02:54.120 SYMLINK libspdk_sock.so 00:02:54.690 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.690 CC lib/nvme/nvme_ctrlr.o 00:02:54.690 CC lib/nvme/nvme_fabric.o 00:02:54.690 CC lib/nvme/nvme_ns_cmd.o 00:02:54.690 CC lib/nvme/nvme_ns.o 00:02:54.690 CC lib/nvme/nvme_pcie_common.o 00:02:54.690 CC lib/nvme/nvme_qpair.o 00:02:54.690 CC lib/nvme/nvme_pcie.o 00:02:54.690 CC lib/nvme/nvme.o 00:02:55.258 LIB libspdk_thread.a 00:02:55.258 SO libspdk_thread.so.11.0 00:02:55.258 CC lib/nvme/nvme_quirks.o 00:02:55.258 CC lib/nvme/nvme_transport.o 00:02:55.258 SYMLINK libspdk_thread.so 00:02:55.258 CC lib/nvme/nvme_discovery.o 00:02:55.258 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.258 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.518 CC lib/nvme/nvme_tcp.o 00:02:55.518 CC lib/nvme/nvme_opal.o 00:02:55.518 CC lib/nvme/nvme_io_msg.o 00:02:55.778 CC lib/nvme/nvme_poll_group.o 00:02:55.778 CC lib/nvme/nvme_zns.o 00:02:55.778 CC lib/nvme/nvme_stubs.o 00:02:55.778 CC lib/nvme/nvme_auth.o 00:02:56.037 CC lib/accel/accel.o 00:02:56.037 CC lib/nvme/nvme_cuse.o 00:02:56.037 CC lib/nvme/nvme_rdma.o 00:02:56.297 CC 
lib/accel/accel_rpc.o 00:02:56.297 CC lib/accel/accel_sw.o 00:02:56.297 CC lib/blob/blobstore.o 00:02:56.557 CC lib/init/json_config.o 00:02:56.557 CC lib/virtio/virtio.o 00:02:56.557 CC lib/init/subsystem.o 00:02:56.557 CC lib/init/subsystem_rpc.o 00:02:56.816 CC lib/init/rpc.o 00:02:56.816 CC lib/virtio/virtio_vhost_user.o 00:02:56.816 CC lib/virtio/virtio_vfio_user.o 00:02:56.816 LIB libspdk_init.a 00:02:56.816 CC lib/blob/request.o 00:02:56.816 CC lib/fsdev/fsdev.o 00:02:57.074 SO libspdk_init.so.6.0 00:02:57.074 CC lib/fsdev/fsdev_io.o 00:02:57.074 SYMLINK libspdk_init.so 00:02:57.074 CC lib/fsdev/fsdev_rpc.o 00:02:57.074 CC lib/virtio/virtio_pci.o 00:02:57.074 CC lib/blob/zeroes.o 00:02:57.074 CC lib/blob/blob_bs_dev.o 00:02:57.074 CC lib/event/app.o 00:02:57.333 CC lib/event/reactor.o 00:02:57.333 LIB libspdk_accel.a 00:02:57.333 SO libspdk_accel.so.16.0 00:02:57.333 CC lib/event/log_rpc.o 00:02:57.333 CC lib/event/app_rpc.o 00:02:57.333 SYMLINK libspdk_accel.so 00:02:57.333 LIB libspdk_virtio.a 00:02:57.333 CC lib/event/scheduler_static.o 00:02:57.333 SO libspdk_virtio.so.7.0 00:02:57.592 SYMLINK libspdk_virtio.so 00:02:57.592 CC lib/bdev/bdev.o 00:02:57.592 CC lib/bdev/bdev_rpc.o 00:02:57.592 CC lib/bdev/bdev_zone.o 00:02:57.592 LIB libspdk_nvme.a 00:02:57.592 CC lib/bdev/part.o 00:02:57.592 LIB libspdk_fsdev.a 00:02:57.592 CC lib/bdev/scsi_nvme.o 00:02:57.592 SO libspdk_fsdev.so.2.0 00:02:57.852 SO libspdk_nvme.so.15.0 00:02:57.852 LIB libspdk_event.a 00:02:57.852 SYMLINK libspdk_fsdev.so 00:02:57.852 SO libspdk_event.so.14.0 00:02:57.852 SYMLINK libspdk_event.so 00:02:58.111 SYMLINK libspdk_nvme.so 00:02:58.111 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:58.678 LIB libspdk_fuse_dispatcher.a 00:02:58.678 SO libspdk_fuse_dispatcher.so.1.0 00:02:58.678 SYMLINK libspdk_fuse_dispatcher.so 00:03:00.054 LIB libspdk_blob.a 00:03:00.054 SO libspdk_blob.so.12.0 00:03:00.312 SYMLINK libspdk_blob.so 00:03:00.312 LIB libspdk_bdev.a 00:03:00.570 SO 
libspdk_bdev.so.17.0 00:03:00.570 SYMLINK libspdk_bdev.so 00:03:00.570 CC lib/lvol/lvol.o 00:03:00.570 CC lib/blobfs/blobfs.o 00:03:00.570 CC lib/blobfs/tree.o 00:03:00.827 CC lib/nvmf/ctrlr.o 00:03:00.827 CC lib/nvmf/ctrlr_discovery.o 00:03:00.827 CC lib/nbd/nbd.o 00:03:00.827 CC lib/nvmf/ctrlr_bdev.o 00:03:00.827 CC lib/scsi/dev.o 00:03:00.827 CC lib/ublk/ublk.o 00:03:00.827 CC lib/ftl/ftl_core.o 00:03:00.827 CC lib/ftl/ftl_init.o 00:03:01.084 CC lib/scsi/lun.o 00:03:01.084 CC lib/scsi/port.o 00:03:01.084 CC lib/nbd/nbd_rpc.o 00:03:01.084 CC lib/nvmf/subsystem.o 00:03:01.084 CC lib/ftl/ftl_layout.o 00:03:01.341 CC lib/scsi/scsi.o 00:03:01.341 CC lib/scsi/scsi_bdev.o 00:03:01.341 LIB libspdk_nbd.a 00:03:01.341 SO libspdk_nbd.so.7.0 00:03:01.341 SYMLINK libspdk_nbd.so 00:03:01.341 CC lib/ftl/ftl_debug.o 00:03:01.341 CC lib/ftl/ftl_io.o 00:03:01.600 CC lib/ublk/ublk_rpc.o 00:03:01.600 CC lib/ftl/ftl_sb.o 00:03:01.600 CC lib/ftl/ftl_l2p.o 00:03:01.600 LIB libspdk_blobfs.a 00:03:01.600 SO libspdk_blobfs.so.11.0 00:03:01.600 LIB libspdk_ublk.a 00:03:01.600 SO libspdk_ublk.so.3.0 00:03:01.600 CC lib/ftl/ftl_l2p_flat.o 00:03:01.600 SYMLINK libspdk_blobfs.so 00:03:01.600 CC lib/ftl/ftl_nv_cache.o 00:03:01.600 LIB libspdk_lvol.a 00:03:01.600 CC lib/ftl/ftl_band.o 00:03:01.858 SO libspdk_lvol.so.11.0 00:03:01.858 CC lib/ftl/ftl_band_ops.o 00:03:01.858 SYMLINK libspdk_ublk.so 00:03:01.858 CC lib/ftl/ftl_writer.o 00:03:01.858 CC lib/nvmf/nvmf.o 00:03:01.858 SYMLINK libspdk_lvol.so 00:03:01.858 CC lib/ftl/ftl_rq.o 00:03:01.858 CC lib/scsi/scsi_pr.o 00:03:01.858 CC lib/ftl/ftl_reloc.o 00:03:02.116 CC lib/ftl/ftl_l2p_cache.o 00:03:02.116 CC lib/ftl/ftl_p2l.o 00:03:02.116 CC lib/ftl/ftl_p2l_log.o 00:03:02.116 CC lib/nvmf/nvmf_rpc.o 00:03:02.374 CC lib/scsi/scsi_rpc.o 00:03:02.374 CC lib/nvmf/transport.o 00:03:02.374 CC lib/scsi/task.o 00:03:02.374 CC lib/nvmf/tcp.o 00:03:02.374 CC lib/nvmf/stubs.o 00:03:02.633 CC lib/nvmf/mdns_server.o 00:03:02.633 CC lib/ftl/mngt/ftl_mngt.o 
00:03:02.633 LIB libspdk_scsi.a 00:03:02.633 SO libspdk_scsi.so.9.0 00:03:02.633 CC lib/nvmf/rdma.o 00:03:02.891 SYMLINK libspdk_scsi.so 00:03:02.891 CC lib/nvmf/auth.o 00:03:02.891 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.891 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:03.149 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:03.149 CC lib/iscsi/conn.o 00:03:03.149 CC lib/iscsi/init_grp.o 00:03:03.149 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:03.149 CC lib/vhost/vhost.o 00:03:03.149 CC lib/iscsi/iscsi.o 00:03:03.149 CC lib/vhost/vhost_rpc.o 00:03:03.149 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:03.407 CC lib/iscsi/param.o 00:03:03.407 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:03.407 CC lib/vhost/vhost_scsi.o 00:03:03.665 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:03.665 CC lib/iscsi/portal_grp.o 00:03:03.665 CC lib/iscsi/tgt_node.o 00:03:03.665 CC lib/iscsi/iscsi_subsystem.o 00:03:03.665 CC lib/iscsi/iscsi_rpc.o 00:03:03.927 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:03.927 CC lib/iscsi/task.o 00:03:03.927 CC lib/vhost/vhost_blk.o 00:03:04.189 CC lib/vhost/rte_vhost_user.o 00:03:04.189 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:04.189 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:04.189 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:04.189 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:04.189 CC lib/ftl/utils/ftl_conf.o 00:03:04.448 CC lib/ftl/utils/ftl_md.o 00:03:04.448 CC lib/ftl/utils/ftl_mempool.o 00:03:04.448 CC lib/ftl/utils/ftl_bitmap.o 00:03:04.448 CC lib/ftl/utils/ftl_property.o 00:03:04.448 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:04.706 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:04.706 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:04.706 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:04.706 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:04.706 LIB libspdk_iscsi.a 00:03:04.706 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:04.706 SO libspdk_iscsi.so.8.0 00:03:04.964 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:04.964 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:04.964 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:04.964 CC 
lib/ftl/nvc/ftl_nvc_dev.o 00:03:04.964 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:04.964 SYMLINK libspdk_iscsi.so 00:03:04.964 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:04.964 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:04.964 CC lib/ftl/base/ftl_base_dev.o 00:03:04.964 CC lib/ftl/base/ftl_base_bdev.o 00:03:04.964 CC lib/ftl/ftl_trace.o 00:03:05.222 LIB libspdk_vhost.a 00:03:05.222 LIB libspdk_nvmf.a 00:03:05.222 SO libspdk_vhost.so.8.0 00:03:05.480 LIB libspdk_ftl.a 00:03:05.480 SYMLINK libspdk_vhost.so 00:03:05.480 SO libspdk_nvmf.so.20.0 00:03:05.738 SO libspdk_ftl.so.9.0 00:03:05.738 SYMLINK libspdk_nvmf.so 00:03:05.996 SYMLINK libspdk_ftl.so 00:03:06.254 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.254 CC module/accel/ioat/accel_ioat.o 00:03:06.254 CC module/keyring/linux/keyring.o 00:03:06.254 CC module/sock/posix/posix.o 00:03:06.254 CC module/accel/error/accel_error.o 00:03:06.254 CC module/keyring/file/keyring.o 00:03:06.254 CC module/accel/dsa/accel_dsa.o 00:03:06.254 CC module/fsdev/aio/fsdev_aio.o 00:03:06.254 CC module/blob/bdev/blob_bdev.o 00:03:06.254 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.512 LIB libspdk_env_dpdk_rpc.a 00:03:06.512 SO libspdk_env_dpdk_rpc.so.6.0 00:03:06.512 CC module/keyring/linux/keyring_rpc.o 00:03:06.512 SYMLINK libspdk_env_dpdk_rpc.so 00:03:06.512 CC module/keyring/file/keyring_rpc.o 00:03:06.512 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.512 LIB libspdk_scheduler_dynamic.a 00:03:06.512 CC module/accel/error/accel_error_rpc.o 00:03:06.512 SO libspdk_scheduler_dynamic.so.4.0 00:03:06.770 LIB libspdk_keyring_linux.a 00:03:06.770 LIB libspdk_keyring_file.a 00:03:06.770 SYMLINK libspdk_scheduler_dynamic.so 00:03:06.770 CC module/accel/iaa/accel_iaa.o 00:03:06.770 LIB libspdk_blob_bdev.a 00:03:06.770 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.770 SO libspdk_keyring_linux.so.1.0 00:03:06.770 SO libspdk_keyring_file.so.2.0 00:03:06.770 SO libspdk_blob_bdev.so.12.0 00:03:06.770 LIB libspdk_accel_ioat.a 00:03:06.770 
SO libspdk_accel_ioat.so.6.0 00:03:06.770 LIB libspdk_accel_error.a 00:03:06.770 SYMLINK libspdk_keyring_linux.so 00:03:06.770 SYMLINK libspdk_keyring_file.so 00:03:06.770 SO libspdk_accel_error.so.2.0 00:03:06.770 SYMLINK libspdk_blob_bdev.so 00:03:06.770 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.770 SYMLINK libspdk_accel_ioat.so 00:03:06.770 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:06.770 SYMLINK libspdk_accel_error.so 00:03:06.770 LIB libspdk_accel_dsa.a 00:03:06.770 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:06.770 SO libspdk_accel_dsa.so.5.0 00:03:07.028 LIB libspdk_accel_iaa.a 00:03:07.028 SYMLINK libspdk_accel_dsa.so 00:03:07.028 CC module/scheduler/gscheduler/gscheduler.o 00:03:07.028 CC module/fsdev/aio/linux_aio_mgr.o 00:03:07.028 SO libspdk_accel_iaa.so.3.0 00:03:07.028 LIB libspdk_scheduler_dpdk_governor.a 00:03:07.028 CC module/blobfs/bdev/blobfs_bdev.o 00:03:07.028 CC module/bdev/delay/vbdev_delay.o 00:03:07.028 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:07.028 SYMLINK libspdk_accel_iaa.so 00:03:07.028 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:07.028 CC module/bdev/error/vbdev_error.o 00:03:07.028 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:07.028 LIB libspdk_scheduler_gscheduler.a 00:03:07.287 SO libspdk_scheduler_gscheduler.so.4.0 00:03:07.287 CC module/bdev/gpt/gpt.o 00:03:07.287 LIB libspdk_fsdev_aio.a 00:03:07.287 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:07.287 SYMLINK libspdk_scheduler_gscheduler.so 00:03:07.287 SO libspdk_fsdev_aio.so.1.0 00:03:07.287 CC module/bdev/lvol/vbdev_lvol.o 00:03:07.287 LIB libspdk_sock_posix.a 00:03:07.287 SO libspdk_sock_posix.so.6.0 00:03:07.287 SYMLINK libspdk_fsdev_aio.so 00:03:07.287 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.287 CC module/bdev/malloc/bdev_malloc.o 00:03:07.287 CC module/bdev/error/vbdev_error_rpc.o 00:03:07.287 SYMLINK libspdk_sock_posix.so 00:03:07.287 CC module/bdev/gpt/vbdev_gpt.o 00:03:07.287 CC module/bdev/null/bdev_null.o 00:03:07.545 CC 
module/bdev/null/bdev_null_rpc.o 00:03:07.545 LIB libspdk_blobfs_bdev.a 00:03:07.545 CC module/bdev/nvme/bdev_nvme.o 00:03:07.545 LIB libspdk_bdev_delay.a 00:03:07.545 SO libspdk_blobfs_bdev.so.6.0 00:03:07.545 SO libspdk_bdev_delay.so.6.0 00:03:07.545 LIB libspdk_bdev_error.a 00:03:07.545 SYMLINK libspdk_blobfs_bdev.so 00:03:07.545 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.545 SO libspdk_bdev_error.so.6.0 00:03:07.545 SYMLINK libspdk_bdev_delay.so 00:03:07.804 CC module/bdev/nvme/nvme_rpc.o 00:03:07.804 SYMLINK libspdk_bdev_error.so 00:03:07.804 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.804 CC module/bdev/nvme/vbdev_opal.o 00:03:07.804 LIB libspdk_bdev_null.a 00:03:07.804 LIB libspdk_bdev_gpt.a 00:03:07.804 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.804 SO libspdk_bdev_null.so.6.0 00:03:07.804 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.804 SO libspdk_bdev_gpt.so.6.0 00:03:07.804 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:07.804 SYMLINK libspdk_bdev_null.so 00:03:07.804 SYMLINK libspdk_bdev_gpt.so 00:03:07.804 LIB libspdk_bdev_lvol.a 00:03:08.062 SO libspdk_bdev_lvol.so.6.0 00:03:08.062 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:08.062 LIB libspdk_bdev_malloc.a 00:03:08.062 SO libspdk_bdev_malloc.so.6.0 00:03:08.062 CC module/bdev/split/vbdev_split.o 00:03:08.062 CC module/bdev/raid/bdev_raid.o 00:03:08.062 SYMLINK libspdk_bdev_lvol.so 00:03:08.062 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:08.062 SYMLINK libspdk_bdev_malloc.so 00:03:08.062 CC module/bdev/split/vbdev_split_rpc.o 00:03:08.062 CC module/bdev/raid/bdev_raid_rpc.o 00:03:08.321 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:08.321 LIB libspdk_bdev_passthru.a 00:03:08.321 SO libspdk_bdev_passthru.so.6.0 00:03:08.321 CC module/bdev/ftl/bdev_ftl.o 00:03:08.321 CC module/bdev/aio/bdev_aio.o 00:03:08.321 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:08.321 LIB libspdk_bdev_split.a 00:03:08.321 SYMLINK libspdk_bdev_passthru.so 00:03:08.321 CC module/bdev/raid/bdev_raid_sb.o 
00:03:08.321 SO libspdk_bdev_split.so.6.0 00:03:08.321 CC module/bdev/raid/raid0.o 00:03:08.321 SYMLINK libspdk_bdev_split.so 00:03:08.580 CC module/bdev/raid/raid1.o 00:03:08.580 CC module/bdev/iscsi/bdev_iscsi.o 00:03:08.580 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:08.580 LIB libspdk_bdev_ftl.a 00:03:08.580 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:08.580 SO libspdk_bdev_ftl.so.6.0 00:03:08.580 CC module/bdev/raid/concat.o 00:03:08.580 CC module/bdev/raid/raid5f.o 00:03:08.839 SYMLINK libspdk_bdev_ftl.so 00:03:08.840 CC module/bdev/aio/bdev_aio_rpc.o 00:03:08.840 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:08.840 LIB libspdk_bdev_zone_block.a 00:03:08.840 SO libspdk_bdev_zone_block.so.6.0 00:03:08.840 LIB libspdk_bdev_aio.a 00:03:08.840 SYMLINK libspdk_bdev_zone_block.so 00:03:08.840 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.840 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.840 SO libspdk_bdev_aio.so.6.0 00:03:09.098 SYMLINK libspdk_bdev_aio.so 00:03:09.098 LIB libspdk_bdev_iscsi.a 00:03:09.098 SO libspdk_bdev_iscsi.so.6.0 00:03:09.098 SYMLINK libspdk_bdev_iscsi.so 00:03:09.356 LIB libspdk_bdev_virtio.a 00:03:09.356 SO libspdk_bdev_virtio.so.6.0 00:03:09.356 LIB libspdk_bdev_raid.a 00:03:09.356 SYMLINK libspdk_bdev_virtio.so 00:03:09.356 SO libspdk_bdev_raid.so.6.0 00:03:09.614 SYMLINK libspdk_bdev_raid.so 00:03:10.988 LIB libspdk_bdev_nvme.a 00:03:10.988 SO libspdk_bdev_nvme.so.7.1 00:03:10.988 SYMLINK libspdk_bdev_nvme.so 00:03:11.563 CC module/event/subsystems/vmd/vmd.o 00:03:11.563 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.563 CC module/event/subsystems/sock/sock.o 00:03:11.563 CC module/event/subsystems/keyring/keyring.o 00:03:11.563 CC module/event/subsystems/fsdev/fsdev.o 00:03:11.563 CC module/event/subsystems/scheduler/scheduler.o 00:03:11.563 CC module/event/subsystems/iobuf/iobuf.o 00:03:11.563 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.563 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.834 LIB 
libspdk_event_vmd.a 00:03:11.834 LIB libspdk_event_fsdev.a 00:03:11.834 LIB libspdk_event_keyring.a 00:03:11.834 LIB libspdk_event_scheduler.a 00:03:11.834 LIB libspdk_event_sock.a 00:03:11.834 SO libspdk_event_fsdev.so.1.0 00:03:11.834 SO libspdk_event_vmd.so.6.0 00:03:11.834 SO libspdk_event_keyring.so.1.0 00:03:11.834 SO libspdk_event_scheduler.so.4.0 00:03:11.834 SO libspdk_event_sock.so.5.0 00:03:11.834 LIB libspdk_event_vhost_blk.a 00:03:11.834 LIB libspdk_event_iobuf.a 00:03:11.834 SYMLINK libspdk_event_fsdev.so 00:03:11.834 SYMLINK libspdk_event_vmd.so 00:03:11.834 SO libspdk_event_vhost_blk.so.3.0 00:03:11.834 SYMLINK libspdk_event_keyring.so 00:03:11.834 SYMLINK libspdk_event_scheduler.so 00:03:11.834 SO libspdk_event_iobuf.so.3.0 00:03:11.834 SYMLINK libspdk_event_sock.so 00:03:11.834 SYMLINK libspdk_event_vhost_blk.so 00:03:11.834 SYMLINK libspdk_event_iobuf.so 00:03:12.402 CC module/event/subsystems/accel/accel.o 00:03:12.402 LIB libspdk_event_accel.a 00:03:12.661 SO libspdk_event_accel.so.6.0 00:03:12.661 SYMLINK libspdk_event_accel.so 00:03:12.920 CC module/event/subsystems/bdev/bdev.o 00:03:13.179 LIB libspdk_event_bdev.a 00:03:13.179 SO libspdk_event_bdev.so.6.0 00:03:13.439 SYMLINK libspdk_event_bdev.so 00:03:13.698 CC module/event/subsystems/nbd/nbd.o 00:03:13.698 CC module/event/subsystems/ublk/ublk.o 00:03:13.698 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:13.698 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:13.698 CC module/event/subsystems/scsi/scsi.o 00:03:13.698 LIB libspdk_event_ublk.a 00:03:13.698 LIB libspdk_event_nbd.a 00:03:13.698 SO libspdk_event_ublk.so.3.0 00:03:13.958 SO libspdk_event_nbd.so.6.0 00:03:13.958 LIB libspdk_event_scsi.a 00:03:13.958 SO libspdk_event_scsi.so.6.0 00:03:13.958 SYMLINK libspdk_event_ublk.so 00:03:13.958 SYMLINK libspdk_event_nbd.so 00:03:13.958 LIB libspdk_event_nvmf.a 00:03:13.958 SYMLINK libspdk_event_scsi.so 00:03:13.958 SO libspdk_event_nvmf.so.6.0 00:03:13.958 SYMLINK libspdk_event_nvmf.so 
00:03:14.217 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.217 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.476 LIB libspdk_event_iscsi.a 00:03:14.476 LIB libspdk_event_vhost_scsi.a 00:03:14.476 SO libspdk_event_iscsi.so.6.0 00:03:14.476 SO libspdk_event_vhost_scsi.so.3.0 00:03:14.476 SYMLINK libspdk_event_iscsi.so 00:03:14.735 SYMLINK libspdk_event_vhost_scsi.so 00:03:14.735 SO libspdk.so.6.0 00:03:14.735 SYMLINK libspdk.so 00:03:14.994 CC app/spdk_lspci/spdk_lspci.o 00:03:14.994 CC app/trace_record/trace_record.o 00:03:14.994 CXX app/trace/trace.o 00:03:15.254 CC app/nvmf_tgt/nvmf_main.o 00:03:15.254 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.254 CC test/thread/poller_perf/poller_perf.o 00:03:15.254 CC examples/util/zipf/zipf.o 00:03:15.254 CC app/spdk_tgt/spdk_tgt.o 00:03:15.254 CC test/app/bdev_svc/bdev_svc.o 00:03:15.254 LINK spdk_lspci 00:03:15.254 CC test/dma/test_dma/test_dma.o 00:03:15.254 LINK iscsi_tgt 00:03:15.254 LINK zipf 00:03:15.514 LINK nvmf_tgt 00:03:15.514 LINK poller_perf 00:03:15.514 LINK spdk_trace_record 00:03:15.514 LINK bdev_svc 00:03:15.514 LINK spdk_trace 00:03:15.514 LINK spdk_tgt 00:03:15.514 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.774 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.774 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:15.774 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.774 CC examples/ioat/perf/perf.o 00:03:15.774 CC examples/idxd/perf/perf.o 00:03:15.774 TEST_HEADER include/spdk/accel.h 00:03:15.774 TEST_HEADER include/spdk/accel_module.h 00:03:15.774 TEST_HEADER include/spdk/assert.h 00:03:15.774 TEST_HEADER include/spdk/barrier.h 00:03:15.774 CC test/app/histogram_perf/histogram_perf.o 00:03:15.774 TEST_HEADER include/spdk/base64.h 00:03:15.774 TEST_HEADER include/spdk/bdev.h 00:03:15.774 TEST_HEADER include/spdk/bdev_module.h 00:03:15.774 TEST_HEADER include/spdk/bdev_zone.h 00:03:15.774 TEST_HEADER include/spdk/bit_array.h 00:03:15.774 TEST_HEADER include/spdk/bit_pool.h 00:03:15.774 TEST_HEADER 
include/spdk/blob_bdev.h 00:03:15.774 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:15.774 TEST_HEADER include/spdk/blobfs.h 00:03:15.774 TEST_HEADER include/spdk/blob.h 00:03:15.774 TEST_HEADER include/spdk/conf.h 00:03:15.774 TEST_HEADER include/spdk/config.h 00:03:15.774 TEST_HEADER include/spdk/cpuset.h 00:03:15.774 TEST_HEADER include/spdk/crc16.h 00:03:15.774 TEST_HEADER include/spdk/crc32.h 00:03:15.774 TEST_HEADER include/spdk/crc64.h 00:03:15.774 TEST_HEADER include/spdk/dif.h 00:03:15.774 TEST_HEADER include/spdk/dma.h 00:03:16.036 TEST_HEADER include/spdk/endian.h 00:03:16.036 TEST_HEADER include/spdk/env_dpdk.h 00:03:16.036 TEST_HEADER include/spdk/env.h 00:03:16.036 TEST_HEADER include/spdk/event.h 00:03:16.036 TEST_HEADER include/spdk/fd_group.h 00:03:16.036 TEST_HEADER include/spdk/fd.h 00:03:16.036 TEST_HEADER include/spdk/file.h 00:03:16.036 TEST_HEADER include/spdk/fsdev.h 00:03:16.036 CC app/spdk_nvme_perf/perf.o 00:03:16.036 TEST_HEADER include/spdk/fsdev_module.h 00:03:16.036 TEST_HEADER include/spdk/ftl.h 00:03:16.036 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:16.036 TEST_HEADER include/spdk/gpt_spec.h 00:03:16.036 TEST_HEADER include/spdk/hexlify.h 00:03:16.036 TEST_HEADER include/spdk/histogram_data.h 00:03:16.036 TEST_HEADER include/spdk/idxd.h 00:03:16.036 TEST_HEADER include/spdk/idxd_spec.h 00:03:16.036 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:16.036 TEST_HEADER include/spdk/init.h 00:03:16.036 TEST_HEADER include/spdk/ioat.h 00:03:16.036 TEST_HEADER include/spdk/ioat_spec.h 00:03:16.036 TEST_HEADER include/spdk/iscsi_spec.h 00:03:16.036 TEST_HEADER include/spdk/json.h 00:03:16.036 TEST_HEADER include/spdk/jsonrpc.h 00:03:16.036 TEST_HEADER include/spdk/keyring.h 00:03:16.036 TEST_HEADER include/spdk/keyring_module.h 00:03:16.036 TEST_HEADER include/spdk/likely.h 00:03:16.036 TEST_HEADER include/spdk/log.h 00:03:16.036 TEST_HEADER include/spdk/lvol.h 00:03:16.036 TEST_HEADER include/spdk/md5.h 00:03:16.036 TEST_HEADER 
include/spdk/memory.h 00:03:16.036 LINK lsvmd 00:03:16.036 TEST_HEADER include/spdk/mmio.h 00:03:16.036 TEST_HEADER include/spdk/nbd.h 00:03:16.036 TEST_HEADER include/spdk/net.h 00:03:16.036 TEST_HEADER include/spdk/notify.h 00:03:16.036 TEST_HEADER include/spdk/nvme.h 00:03:16.036 TEST_HEADER include/spdk/nvme_intel.h 00:03:16.036 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:16.036 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:16.036 TEST_HEADER include/spdk/nvme_spec.h 00:03:16.036 TEST_HEADER include/spdk/nvme_zns.h 00:03:16.036 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:16.036 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:16.036 TEST_HEADER include/spdk/nvmf.h 00:03:16.036 TEST_HEADER include/spdk/nvmf_spec.h 00:03:16.036 TEST_HEADER include/spdk/nvmf_transport.h 00:03:16.036 TEST_HEADER include/spdk/opal.h 00:03:16.036 TEST_HEADER include/spdk/opal_spec.h 00:03:16.036 TEST_HEADER include/spdk/pci_ids.h 00:03:16.036 TEST_HEADER include/spdk/pipe.h 00:03:16.036 TEST_HEADER include/spdk/queue.h 00:03:16.036 TEST_HEADER include/spdk/reduce.h 00:03:16.036 TEST_HEADER include/spdk/rpc.h 00:03:16.036 TEST_HEADER include/spdk/scheduler.h 00:03:16.036 TEST_HEADER include/spdk/scsi.h 00:03:16.036 TEST_HEADER include/spdk/scsi_spec.h 00:03:16.036 LINK test_dma 00:03:16.036 TEST_HEADER include/spdk/sock.h 00:03:16.036 LINK ioat_perf 00:03:16.036 TEST_HEADER include/spdk/stdinc.h 00:03:16.036 TEST_HEADER include/spdk/string.h 00:03:16.036 TEST_HEADER include/spdk/thread.h 00:03:16.036 TEST_HEADER include/spdk/trace.h 00:03:16.036 TEST_HEADER include/spdk/trace_parser.h 00:03:16.036 TEST_HEADER include/spdk/tree.h 00:03:16.036 TEST_HEADER include/spdk/ublk.h 00:03:16.036 TEST_HEADER include/spdk/util.h 00:03:16.036 TEST_HEADER include/spdk/uuid.h 00:03:16.036 TEST_HEADER include/spdk/version.h 00:03:16.036 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:16.036 LINK histogram_perf 00:03:16.036 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:16.036 TEST_HEADER 
include/spdk/vhost.h 00:03:16.036 TEST_HEADER include/spdk/vmd.h 00:03:16.036 TEST_HEADER include/spdk/xor.h 00:03:16.036 TEST_HEADER include/spdk/zipf.h 00:03:16.036 CXX test/cpp_headers/accel.o 00:03:16.036 LINK nvme_fuzz 00:03:16.036 LINK idxd_perf 00:03:16.297 CXX test/cpp_headers/accel_module.o 00:03:16.297 CC examples/ioat/verify/verify.o 00:03:16.297 CC test/app/jsoncat/jsoncat.o 00:03:16.297 CC test/app/stub/stub.o 00:03:16.297 CC examples/vmd/led/led.o 00:03:16.297 CC app/spdk_nvme_identify/identify.o 00:03:16.297 CXX test/cpp_headers/assert.o 00:03:16.297 CC app/spdk_nvme_discover/discovery_aer.o 00:03:16.557 LINK jsoncat 00:03:16.557 LINK led 00:03:16.557 LINK verify 00:03:16.557 LINK stub 00:03:16.557 CXX test/cpp_headers/barrier.o 00:03:16.557 CXX test/cpp_headers/base64.o 00:03:16.557 LINK spdk_nvme_discover 00:03:16.557 CXX test/cpp_headers/bdev.o 00:03:16.816 LINK vhost_fuzz 00:03:16.816 CXX test/cpp_headers/bdev_module.o 00:03:16.816 CC app/spdk_top/spdk_top.o 00:03:16.816 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:16.816 CXX test/cpp_headers/bdev_zone.o 00:03:16.816 CXX test/cpp_headers/bit_array.o 00:03:17.074 CC app/vhost/vhost.o 00:03:17.074 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.074 CC test/event/event_perf/event_perf.o 00:03:17.074 CC test/event/reactor/reactor.o 00:03:17.074 LINK vhost 00:03:17.074 LINK interrupt_tgt 00:03:17.333 CXX test/cpp_headers/bit_pool.o 00:03:17.333 LINK event_perf 00:03:17.333 LINK spdk_nvme_perf 00:03:17.333 CXX test/cpp_headers/blob_bdev.o 00:03:17.333 LINK reactor 00:03:17.592 LINK spdk_nvme_identify 00:03:17.592 CC test/event/app_repeat/app_repeat.o 00:03:17.592 CC test/event/reactor_perf/reactor_perf.o 00:03:17.592 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.592 CC examples/thread/thread/thread_ex.o 00:03:17.592 CC examples/sock/hello_world/hello_sock.o 00:03:17.592 LINK mem_callbacks 00:03:17.850 LINK reactor_perf 00:03:17.850 LINK app_repeat 00:03:17.850 CC test/event/scheduler/scheduler.o 
00:03:17.850 CXX test/cpp_headers/blobfs.o 00:03:17.850 CC test/nvme/aer/aer.o 00:03:17.850 LINK iscsi_fuzz 00:03:17.850 LINK thread 00:03:17.850 CXX test/cpp_headers/blob.o 00:03:17.850 CC test/env/vtophys/vtophys.o 00:03:17.850 LINK spdk_top 00:03:17.850 CXX test/cpp_headers/conf.o 00:03:18.108 LINK hello_sock 00:03:18.108 LINK scheduler 00:03:18.108 CC test/rpc_client/rpc_client_test.o 00:03:18.108 LINK vtophys 00:03:18.108 CXX test/cpp_headers/config.o 00:03:18.108 CXX test/cpp_headers/cpuset.o 00:03:18.108 LINK aer 00:03:18.366 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:18.366 CC app/spdk_dd/spdk_dd.o 00:03:18.366 CC test/env/memory/memory_ut.o 00:03:18.366 LINK rpc_client_test 00:03:18.366 CXX test/cpp_headers/crc16.o 00:03:18.366 CC test/accel/dif/dif.o 00:03:18.366 CC examples/nvme/hello_world/hello_world.o 00:03:18.366 CC test/env/pci/pci_ut.o 00:03:18.366 LINK env_dpdk_post_init 00:03:18.366 CXX test/cpp_headers/crc32.o 00:03:18.366 CC test/nvme/reset/reset.o 00:03:18.625 CC test/blobfs/mkfs/mkfs.o 00:03:18.625 LINK hello_world 00:03:18.625 CXX test/cpp_headers/crc64.o 00:03:18.625 LINK spdk_dd 00:03:18.625 CC test/lvol/esnap/esnap.o 00:03:18.625 CC test/nvme/sgl/sgl.o 00:03:18.625 LINK mkfs 00:03:18.883 LINK reset 00:03:18.883 CXX test/cpp_headers/dif.o 00:03:18.883 CC examples/nvme/reconnect/reconnect.o 00:03:18.883 LINK pci_ut 00:03:18.883 CXX test/cpp_headers/dma.o 00:03:19.141 LINK sgl 00:03:19.141 CC test/nvme/e2edp/nvme_dp.o 00:03:19.141 CC app/fio/nvme/fio_plugin.o 00:03:19.141 LINK dif 00:03:19.141 CXX test/cpp_headers/endian.o 00:03:19.141 CXX test/cpp_headers/env_dpdk.o 00:03:19.141 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:19.398 LINK reconnect 00:03:19.398 CC test/nvme/overhead/overhead.o 00:03:19.398 CXX test/cpp_headers/env.o 00:03:19.398 LINK nvme_dp 00:03:19.399 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.399 CC examples/nvme/arbitration/arbitration.o 00:03:19.399 LINK hello_fsdev 00:03:19.655 CXX 
test/cpp_headers/event.o 00:03:19.655 CXX test/cpp_headers/fd_group.o 00:03:19.655 LINK memory_ut 00:03:19.655 LINK overhead 00:03:19.655 CC examples/accel/perf/accel_perf.o 00:03:19.655 CXX test/cpp_headers/fd.o 00:03:19.655 LINK spdk_nvme 00:03:19.912 CC app/fio/bdev/fio_plugin.o 00:03:19.912 CXX test/cpp_headers/file.o 00:03:19.912 LINK arbitration 00:03:19.912 CC test/nvme/err_injection/err_injection.o 00:03:19.912 CC examples/blob/hello_world/hello_blob.o 00:03:19.912 CC test/nvme/startup/startup.o 00:03:19.912 CXX test/cpp_headers/fsdev.o 00:03:20.170 LINK nvme_manage 00:03:20.170 LINK err_injection 00:03:20.170 CC test/bdev/bdevio/bdevio.o 00:03:20.170 LINK startup 00:03:20.170 CC examples/nvme/hotplug/hotplug.o 00:03:20.170 CXX test/cpp_headers/fsdev_module.o 00:03:20.170 LINK hello_blob 00:03:20.170 LINK accel_perf 00:03:20.170 CXX test/cpp_headers/ftl.o 00:03:20.428 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.428 LINK spdk_bdev 00:03:20.428 LINK hotplug 00:03:20.428 CC test/nvme/reserve/reserve.o 00:03:20.428 CC test/nvme/simple_copy/simple_copy.o 00:03:20.428 CXX test/cpp_headers/fuse_dispatcher.o 00:03:20.428 CC test/nvme/connect_stress/connect_stress.o 00:03:20.687 CC examples/blob/cli/blobcli.o 00:03:20.687 LINK cmb_copy 00:03:20.687 LINK bdevio 00:03:20.687 CXX test/cpp_headers/gpt_spec.o 00:03:20.687 LINK reserve 00:03:20.687 CC examples/nvme/abort/abort.o 00:03:20.687 LINK connect_stress 00:03:20.687 LINK simple_copy 00:03:20.687 CC examples/bdev/hello_world/hello_bdev.o 00:03:20.946 CXX test/cpp_headers/hexlify.o 00:03:20.946 CXX test/cpp_headers/histogram_data.o 00:03:20.946 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.946 CC test/nvme/boot_partition/boot_partition.o 00:03:20.946 CXX test/cpp_headers/idxd.o 00:03:20.946 CC test/nvme/compliance/nvme_compliance.o 00:03:20.946 LINK hello_bdev 00:03:20.946 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.946 LINK pmr_persistence 00:03:21.204 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:03:21.204 LINK blobcli 00:03:21.204 LINK boot_partition 00:03:21.204 LINK abort 00:03:21.204 CXX test/cpp_headers/idxd_spec.o 00:03:21.204 CXX test/cpp_headers/init.o 00:03:21.204 LINK fused_ordering 00:03:21.204 LINK doorbell_aers 00:03:21.461 CXX test/cpp_headers/ioat.o 00:03:21.461 CC examples/bdev/bdevperf/bdevperf.o 00:03:21.461 CXX test/cpp_headers/ioat_spec.o 00:03:21.461 LINK nvme_compliance 00:03:21.461 CXX test/cpp_headers/iscsi_spec.o 00:03:21.461 CC test/nvme/fdp/fdp.o 00:03:21.461 CXX test/cpp_headers/json.o 00:03:21.461 CC test/nvme/cuse/cuse.o 00:03:21.461 CXX test/cpp_headers/jsonrpc.o 00:03:21.461 CXX test/cpp_headers/keyring.o 00:03:21.461 CXX test/cpp_headers/keyring_module.o 00:03:21.719 CXX test/cpp_headers/likely.o 00:03:21.719 CXX test/cpp_headers/log.o 00:03:21.719 CXX test/cpp_headers/lvol.o 00:03:21.719 CXX test/cpp_headers/md5.o 00:03:21.719 CXX test/cpp_headers/memory.o 00:03:21.719 CXX test/cpp_headers/mmio.o 00:03:21.719 CXX test/cpp_headers/nbd.o 00:03:21.719 CXX test/cpp_headers/net.o 00:03:21.719 CXX test/cpp_headers/notify.o 00:03:21.719 CXX test/cpp_headers/nvme.o 00:03:21.977 LINK fdp 00:03:21.977 CXX test/cpp_headers/nvme_intel.o 00:03:21.977 CXX test/cpp_headers/nvme_ocssd.o 00:03:21.977 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:21.977 CXX test/cpp_headers/nvme_spec.o 00:03:21.977 CXX test/cpp_headers/nvme_zns.o 00:03:21.977 CXX test/cpp_headers/nvmf_cmd.o 00:03:21.977 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:21.977 CXX test/cpp_headers/nvmf.o 00:03:21.977 CXX test/cpp_headers/nvmf_spec.o 00:03:22.236 CXX test/cpp_headers/nvmf_transport.o 00:03:22.236 CXX test/cpp_headers/opal.o 00:03:22.236 CXX test/cpp_headers/opal_spec.o 00:03:22.236 CXX test/cpp_headers/pci_ids.o 00:03:22.236 CXX test/cpp_headers/pipe.o 00:03:22.236 CXX test/cpp_headers/queue.o 00:03:22.236 CXX test/cpp_headers/reduce.o 00:03:22.236 CXX test/cpp_headers/rpc.o 00:03:22.236 CXX test/cpp_headers/scheduler.o 
00:03:22.236 CXX test/cpp_headers/scsi.o 00:03:22.236 CXX test/cpp_headers/scsi_spec.o 00:03:22.494 CXX test/cpp_headers/sock.o 00:03:22.494 CXX test/cpp_headers/stdinc.o 00:03:22.494 LINK bdevperf 00:03:22.494 CXX test/cpp_headers/string.o 00:03:22.494 CXX test/cpp_headers/thread.o 00:03:22.494 CXX test/cpp_headers/trace.o 00:03:22.494 CXX test/cpp_headers/trace_parser.o 00:03:22.494 CXX test/cpp_headers/tree.o 00:03:22.494 CXX test/cpp_headers/ublk.o 00:03:22.494 CXX test/cpp_headers/uuid.o 00:03:22.494 CXX test/cpp_headers/util.o 00:03:22.751 CXX test/cpp_headers/version.o 00:03:22.751 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.751 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.751 CXX test/cpp_headers/vhost.o 00:03:22.751 CXX test/cpp_headers/vmd.o 00:03:22.751 CXX test/cpp_headers/xor.o 00:03:22.751 CXX test/cpp_headers/zipf.o 00:03:23.009 CC examples/nvmf/nvmf/nvmf.o 00:03:23.009 LINK cuse 00:03:23.267 LINK nvmf 00:03:25.801 LINK esnap 00:03:25.801 00:03:25.801 real 1m22.188s 00:03:25.801 user 7m24.976s 00:03:25.801 sys 1m35.390s 00:03:25.801 21:10:43 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:25.801 21:10:43 make -- common/autotest_common.sh@10 -- $ set +x 00:03:25.801 ************************************ 00:03:25.801 END TEST make 00:03:25.801 ************************************ 00:03:25.801 21:10:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:25.801 21:10:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:25.801 21:10:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:25.801 21:10:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.801 21:10:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:25.801 21:10:43 -- pm/common@44 -- $ pid=5472 00:03:25.801 21:10:43 -- pm/common@50 -- $ kill -TERM 5472 00:03:25.801 21:10:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.801 21:10:43 -- pm/common@43 -- $ 
[[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:25.801 21:10:43 -- pm/common@44 -- $ pid=5474 00:03:25.801 21:10:43 -- pm/common@50 -- $ kill -TERM 5474 00:03:25.801 21:10:43 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:25.801 21:10:43 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:26.060 21:10:43 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:26.060 21:10:43 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:26.060 21:10:43 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:26.060 21:10:44 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:26.060 21:10:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:26.060 21:10:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:26.060 21:10:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:26.060 21:10:44 -- scripts/common.sh@336 -- # IFS=.-: 00:03:26.060 21:10:44 -- scripts/common.sh@336 -- # read -ra ver1 00:03:26.060 21:10:44 -- scripts/common.sh@337 -- # IFS=.-: 00:03:26.060 21:10:44 -- scripts/common.sh@337 -- # read -ra ver2 00:03:26.060 21:10:44 -- scripts/common.sh@338 -- # local 'op=<' 00:03:26.060 21:10:44 -- scripts/common.sh@340 -- # ver1_l=2 00:03:26.060 21:10:44 -- scripts/common.sh@341 -- # ver2_l=1 00:03:26.060 21:10:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:26.060 21:10:44 -- scripts/common.sh@344 -- # case "$op" in 00:03:26.060 21:10:44 -- scripts/common.sh@345 -- # : 1 00:03:26.060 21:10:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:26.060 21:10:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:26.060 21:10:44 -- scripts/common.sh@365 -- # decimal 1 00:03:26.060 21:10:44 -- scripts/common.sh@353 -- # local d=1 00:03:26.060 21:10:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:26.060 21:10:44 -- scripts/common.sh@355 -- # echo 1 00:03:26.060 21:10:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:26.060 21:10:44 -- scripts/common.sh@366 -- # decimal 2 00:03:26.060 21:10:44 -- scripts/common.sh@353 -- # local d=2 00:03:26.060 21:10:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:26.060 21:10:44 -- scripts/common.sh@355 -- # echo 2 00:03:26.060 21:10:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:26.060 21:10:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:26.060 21:10:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:26.060 21:10:44 -- scripts/common.sh@368 -- # return 0 00:03:26.060 21:10:44 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:26.060 21:10:44 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:26.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.060 --rc genhtml_branch_coverage=1 00:03:26.060 --rc genhtml_function_coverage=1 00:03:26.060 --rc genhtml_legend=1 00:03:26.060 --rc geninfo_all_blocks=1 00:03:26.060 --rc geninfo_unexecuted_blocks=1 00:03:26.060 00:03:26.060 ' 00:03:26.060 21:10:44 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:26.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.060 --rc genhtml_branch_coverage=1 00:03:26.060 --rc genhtml_function_coverage=1 00:03:26.060 --rc genhtml_legend=1 00:03:26.060 --rc geninfo_all_blocks=1 00:03:26.060 --rc geninfo_unexecuted_blocks=1 00:03:26.060 00:03:26.060 ' 00:03:26.060 21:10:44 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:26.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.060 --rc genhtml_branch_coverage=1 00:03:26.060 --rc 
genhtml_function_coverage=1 00:03:26.060 --rc genhtml_legend=1 00:03:26.060 --rc geninfo_all_blocks=1 00:03:26.060 --rc geninfo_unexecuted_blocks=1 00:03:26.060 00:03:26.060 ' 00:03:26.060 21:10:44 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:26.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.060 --rc genhtml_branch_coverage=1 00:03:26.060 --rc genhtml_function_coverage=1 00:03:26.060 --rc genhtml_legend=1 00:03:26.060 --rc geninfo_all_blocks=1 00:03:26.061 --rc geninfo_unexecuted_blocks=1 00:03:26.061 00:03:26.061 ' 00:03:26.061 21:10:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:26.061 21:10:44 -- nvmf/common.sh@7 -- # uname -s 00:03:26.061 21:10:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:26.061 21:10:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:26.061 21:10:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:26.061 21:10:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:26.061 21:10:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:26.061 21:10:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:26.061 21:10:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:26.061 21:10:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:26.061 21:10:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:26.061 21:10:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:26.061 21:10:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eab190e8-05a3-4c07-ae74-fd1981a29539 00:03:26.061 21:10:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=eab190e8-05a3-4c07-ae74-fd1981a29539 00:03:26.061 21:10:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:26.061 21:10:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:26.061 21:10:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:26.061 21:10:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:26.061 21:10:44 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:26.061 21:10:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:26.061 21:10:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:26.061 21:10:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:26.061 21:10:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:26.061 21:10:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.061 21:10:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.061 21:10:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.061 21:10:44 -- paths/export.sh@5 -- # export PATH 00:03:26.061 21:10:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.061 21:10:44 -- nvmf/common.sh@51 -- # : 0 00:03:26.061 21:10:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:26.061 21:10:44 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:26.061 21:10:44 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:26.061 21:10:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:26.061 21:10:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:26.061 21:10:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:26.061 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:26.061 21:10:44 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:26.061 21:10:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:26.061 21:10:44 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:26.061 21:10:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:26.061 21:10:44 -- spdk/autotest.sh@32 -- # uname -s 00:03:26.061 21:10:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:26.061 21:10:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:26.061 21:10:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:26.061 21:10:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:26.061 21:10:44 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:26.061 21:10:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:26.061 21:10:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:26.061 21:10:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:26.061 21:10:44 -- spdk/autotest.sh@48 -- # udevadm_pid=54412 00:03:26.061 21:10:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:26.061 21:10:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:26.061 21:10:44 -- pm/common@17 -- # local monitor 00:03:26.061 21:10:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.061 21:10:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.061 21:10:44 -- pm/common@21 -- # date +%s 00:03:26.061 21:10:44 -- pm/common@21 -- # date +%s 00:03:26.061 21:10:44 -- 
pm/common@25 -- # sleep 1 00:03:26.061 21:10:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732655444 00:03:26.061 21:10:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732655444 00:03:26.061 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732655444_collect-cpu-load.pm.log 00:03:26.320 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732655444_collect-vmstat.pm.log 00:03:27.256 21:10:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:27.256 21:10:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:27.256 21:10:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:27.256 21:10:45 -- common/autotest_common.sh@10 -- # set +x 00:03:27.256 21:10:45 -- spdk/autotest.sh@59 -- # create_test_list 00:03:27.256 21:10:45 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:27.256 21:10:45 -- common/autotest_common.sh@10 -- # set +x 00:03:27.256 21:10:45 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:27.256 21:10:45 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:27.256 21:10:45 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:27.256 21:10:45 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:27.256 21:10:45 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:27.256 21:10:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:27.256 21:10:45 -- common/autotest_common.sh@1457 -- # uname 00:03:27.256 21:10:45 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:27.256 21:10:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:27.256 21:10:45 -- common/autotest_common.sh@1477 -- # 
uname 00:03:27.257 21:10:45 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:27.257 21:10:45 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:27.257 21:10:45 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:27.257 lcov: LCOV version 1.15 00:03:27.257 21:10:45 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:42.143 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:42.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:57.038 21:11:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:57.038 21:11:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.038 21:11:14 -- common/autotest_common.sh@10 -- # set +x 00:03:57.038 21:11:14 -- spdk/autotest.sh@78 -- # rm -f 00:03:57.038 21:11:14 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.982 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:57.982 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:57.982 21:11:15 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:57.982 21:11:15 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:57.982 21:11:15 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:57.982 21:11:15 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:57.982 21:11:15 
-- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:57.982 21:11:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:57.982 21:11:15 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:57.982 21:11:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:57.982 21:11:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:57.982 21:11:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:57.982 21:11:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:57.982 21:11:15 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:57.982 21:11:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:57.982 21:11:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:57.982 21:11:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:57.982 21:11:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:57.982 21:11:15 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:57.982 21:11:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:57.982 21:11:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:57.982 21:11:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:57.982 21:11:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:57.982 21:11:15 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:57.982 21:11:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:57.982 21:11:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:57.982 21:11:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:57.982 21:11:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.982 21:11:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.982 21:11:15 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:03:57.982 21:11:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:57.982 21:11:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:57.982 No valid GPT data, bailing 00:03:57.982 21:11:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:57.982 21:11:15 -- scripts/common.sh@394 -- # pt= 00:03:57.982 21:11:15 -- scripts/common.sh@395 -- # return 1 00:03:57.982 21:11:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:57.982 1+0 records in 00:03:57.982 1+0 records out 00:03:57.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00636424 s, 165 MB/s 00:03:57.982 21:11:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.982 21:11:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.982 21:11:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:57.982 21:11:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:57.982 21:11:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:57.982 No valid GPT data, bailing 00:03:57.982 21:11:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:57.982 21:11:16 -- scripts/common.sh@394 -- # pt= 00:03:57.982 21:11:16 -- scripts/common.sh@395 -- # return 1 00:03:57.982 21:11:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:57.982 1+0 records in 00:03:57.982 1+0 records out 00:03:57.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049538 s, 212 MB/s 00:03:57.982 21:11:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.982 21:11:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.982 21:11:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:57.982 21:11:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:57.982 21:11:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:58.242 No 
valid GPT data, bailing 00:03:58.242 21:11:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:58.242 21:11:16 -- scripts/common.sh@394 -- # pt= 00:03:58.242 21:11:16 -- scripts/common.sh@395 -- # return 1 00:03:58.242 21:11:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:58.242 1+0 records in 00:03:58.242 1+0 records out 00:03:58.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062584 s, 168 MB/s 00:03:58.242 21:11:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.242 21:11:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.242 21:11:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:58.243 21:11:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:58.243 21:11:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:58.243 No valid GPT data, bailing 00:03:58.243 21:11:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:58.243 21:11:16 -- scripts/common.sh@394 -- # pt= 00:03:58.243 21:11:16 -- scripts/common.sh@395 -- # return 1 00:03:58.243 21:11:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:58.243 1+0 records in 00:03:58.243 1+0 records out 00:03:58.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520392 s, 201 MB/s 00:03:58.243 21:11:16 -- spdk/autotest.sh@105 -- # sync 00:03:58.243 21:11:16 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:58.243 21:11:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:58.243 21:11:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:01.535 21:11:19 -- spdk/autotest.sh@111 -- # uname -s 00:04:01.535 21:11:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:01.535 21:11:19 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:01.535 21:11:19 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:01.796 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.796 Hugepages 00:04:01.796 node hugesize free / total 00:04:01.796 node0 1048576kB 0 / 0 00:04:01.796 node0 2048kB 0 / 0 00:04:01.796 00:04:01.796 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.796 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:02.056 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:02.056 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:02.056 21:11:20 -- spdk/autotest.sh@117 -- # uname -s 00:04:02.057 21:11:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:02.057 21:11:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:02.057 21:11:20 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.887 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.887 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.887 21:11:20 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:03.825 21:11:21 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:03.825 21:11:21 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:03.825 21:11:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:04.083 21:11:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:04.083 21:11:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:04.083 21:11:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:04.083 21:11:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.083 21:11:21 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.084 21:11:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:04.084 21:11:22 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:04.084 21:11:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:04.084 21:11:22 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.652 Waiting for block devices as requested 00:04:04.652 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:04.652 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:04.652 21:11:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:04.652 21:11:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:04.652 21:11:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.652 21:11:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:04.652 21:11:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.652 21:11:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:04.652 21:11:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.912 21:11:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:04.912 21:11:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:04.912 21:11:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:04.912 21:11:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:04.912 21:11:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:04.912 21:11:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:04.912 21:11:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:04.912 21:11:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:04.912 21:11:22 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:04.912 21:11:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:04.912 21:11:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:04.912 21:11:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:04.912 21:11:22 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:04.912 21:11:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:04.912 21:11:22 -- common/autotest_common.sh@1543 -- # continue 00:04:04.912 21:11:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:04.912 21:11:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:04.912 21:11:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.912 21:11:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:04.912 21:11:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.912 21:11:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:04.912 21:11:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.912 21:11:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:04.912 21:11:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:04.912 21:11:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:04.912 21:11:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:04.912 21:11:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:04.912 21:11:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:04.912 21:11:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:04.912 21:11:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:04.912 21:11:22 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:04.912 21:11:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:04.912 21:11:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:04.912 21:11:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:04.912 21:11:22 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:04.912 21:11:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:04.912 21:11:22 -- common/autotest_common.sh@1543 -- # continue 00:04:04.912 21:11:22 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:04.912 21:11:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.912 21:11:22 -- common/autotest_common.sh@10 -- # set +x 00:04:04.912 21:11:22 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:04.912 21:11:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.912 21:11:22 -- common/autotest_common.sh@10 -- # set +x 00:04:04.912 21:11:22 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.909 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.909 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.909 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.910 21:11:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:05.910 21:11:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.910 21:11:23 -- common/autotest_common.sh@10 -- # set +x 00:04:05.910 21:11:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:05.910 21:11:23 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:05.910 21:11:23 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:05.910 21:11:23 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:05.910 21:11:23 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:05.910 21:11:23 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:05.910 21:11:23 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:05.910 21:11:23 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:05.910 
21:11:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:05.910 21:11:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:05.910 21:11:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.910 21:11:23 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.910 21:11:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:05.910 21:11:24 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:05.910 21:11:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:05.910 21:11:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:05.910 21:11:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:05.910 21:11:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:05.910 21:11:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.910 21:11:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:05.910 21:11:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:05.910 21:11:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:05.910 21:11:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.910 21:11:24 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:05.910 21:11:24 -- common/autotest_common.sh@1572 -- # return 0 00:04:05.910 21:11:24 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:05.910 21:11:24 -- common/autotest_common.sh@1580 -- # return 0 00:04:05.910 21:11:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:05.910 21:11:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:05.910 21:11:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.910 21:11:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.910 21:11:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:05.910 21:11:24 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.910 21:11:24 -- common/autotest_common.sh@10 -- # set +x 00:04:06.169 21:11:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:06.169 21:11:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:06.169 21:11:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.169 21:11:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.169 21:11:24 -- common/autotest_common.sh@10 -- # set +x 00:04:06.169 ************************************ 00:04:06.169 START TEST env 00:04:06.169 ************************************ 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:06.169 * Looking for test storage... 00:04:06.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.169 21:11:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.169 21:11:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.169 21:11:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.169 21:11:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.169 21:11:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.169 21:11:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.169 21:11:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.169 21:11:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.169 21:11:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.169 21:11:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.169 21:11:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.169 21:11:24 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:06.169 21:11:24 env -- scripts/common.sh@345 -- # : 1 00:04:06.169 21:11:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.169 21:11:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:06.169 21:11:24 env -- scripts/common.sh@365 -- # decimal 1 00:04:06.169 21:11:24 env -- scripts/common.sh@353 -- # local d=1 00:04:06.169 21:11:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.169 21:11:24 env -- scripts/common.sh@355 -- # echo 1 00:04:06.169 21:11:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.169 21:11:24 env -- scripts/common.sh@366 -- # decimal 2 00:04:06.169 21:11:24 env -- scripts/common.sh@353 -- # local d=2 00:04:06.169 21:11:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.169 21:11:24 env -- scripts/common.sh@355 -- # echo 2 00:04:06.169 21:11:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.169 21:11:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.169 21:11:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.169 21:11:24 env -- scripts/common.sh@368 -- # return 0 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.169 --rc genhtml_branch_coverage=1 00:04:06.169 --rc genhtml_function_coverage=1 00:04:06.169 --rc genhtml_legend=1 00:04:06.169 --rc geninfo_all_blocks=1 00:04:06.169 --rc geninfo_unexecuted_blocks=1 00:04:06.169 00:04:06.169 ' 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.169 --rc genhtml_branch_coverage=1 00:04:06.169 --rc genhtml_function_coverage=1 00:04:06.169 --rc genhtml_legend=1 00:04:06.169 --rc 
geninfo_all_blocks=1 00:04:06.169 --rc geninfo_unexecuted_blocks=1 00:04:06.169 00:04:06.169 ' 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.169 --rc genhtml_branch_coverage=1 00:04:06.169 --rc genhtml_function_coverage=1 00:04:06.169 --rc genhtml_legend=1 00:04:06.169 --rc geninfo_all_blocks=1 00:04:06.169 --rc geninfo_unexecuted_blocks=1 00:04:06.169 00:04:06.169 ' 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.169 --rc genhtml_branch_coverage=1 00:04:06.169 --rc genhtml_function_coverage=1 00:04:06.169 --rc genhtml_legend=1 00:04:06.169 --rc geninfo_all_blocks=1 00:04:06.169 --rc geninfo_unexecuted_blocks=1 00:04:06.169 00:04:06.169 ' 00:04:06.169 21:11:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.169 21:11:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.169 21:11:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.428 ************************************ 00:04:06.428 START TEST env_memory 00:04:06.428 ************************************ 00:04:06.428 21:11:24 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.428 00:04:06.428 00:04:06.428 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.428 http://cunit.sourceforge.net/ 00:04:06.428 00:04:06.428 00:04:06.428 Suite: memory 00:04:06.428 Test: alloc and free memory map ...[2024-11-26 21:11:24.393777] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:06.428 passed 00:04:06.428 Test: mem map translation ...[2024-11-26 21:11:24.437114] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:06.428 [2024-11-26 21:11:24.437178] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:06.428 [2024-11-26 21:11:24.437237] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:06.428 [2024-11-26 21:11:24.437270] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:06.428 passed 00:04:06.428 Test: mem map registration ...[2024-11-26 21:11:24.503136] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:06.428 [2024-11-26 21:11:24.503181] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:06.428 passed 00:04:06.687 Test: mem map adjacent registrations ...passed 00:04:06.687 00:04:06.687 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.687 suites 1 1 n/a 0 0 00:04:06.687 tests 4 4 4 0 0 00:04:06.687 asserts 152 152 152 0 n/a 00:04:06.687 00:04:06.687 Elapsed time = 0.238 seconds 00:04:06.687 00:04:06.687 real 0m0.289s 00:04:06.687 user 0m0.255s 00:04:06.687 sys 0m0.024s 00:04:06.687 21:11:24 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.687 21:11:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:06.687 ************************************ 00:04:06.687 END TEST env_memory 00:04:06.687 ************************************ 00:04:06.687 21:11:24 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:06.687 
21:11:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.687 21:11:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.687 21:11:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.687 ************************************ 00:04:06.687 START TEST env_vtophys 00:04:06.687 ************************************ 00:04:06.687 21:11:24 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:06.687 EAL: lib.eal log level changed from notice to debug 00:04:06.687 EAL: Detected lcore 0 as core 0 on socket 0 00:04:06.687 EAL: Detected lcore 1 as core 0 on socket 0 00:04:06.687 EAL: Detected lcore 2 as core 0 on socket 0 00:04:06.687 EAL: Detected lcore 3 as core 0 on socket 0 00:04:06.687 EAL: Detected lcore 4 as core 0 on socket 0 00:04:06.687 EAL: Detected lcore 5 as core 0 on socket 0 00:04:06.687 EAL: Detected lcore 6 as core 0 on socket 0 00:04:06.687 EAL: Detected lcore 7 as core 0 on socket 0 00:04:06.687 EAL: Detected lcore 8 as core 0 on socket 0 00:04:06.687 EAL: Detected lcore 9 as core 0 on socket 0 00:04:06.687 EAL: Maximum logical cores by configuration: 128 00:04:06.687 EAL: Detected CPU lcores: 10 00:04:06.687 EAL: Detected NUMA nodes: 1 00:04:06.687 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:06.687 EAL: Detected shared linkage of DPDK 00:04:06.687 EAL: No shared files mode enabled, IPC will be disabled 00:04:06.687 EAL: Selected IOVA mode 'PA' 00:04:06.687 EAL: Probing VFIO support... 00:04:06.687 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:06.688 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:06.688 EAL: Ask a virtual area of 0x2e000 bytes 00:04:06.688 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:06.688 EAL: Setting up physically contiguous memory... 
00:04:06.688 EAL: Setting maximum number of open files to 524288 00:04:06.688 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:06.688 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:06.688 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.688 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:06.688 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.688 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.688 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:06.688 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:06.688 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.688 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:06.688 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.688 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.688 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:06.688 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:06.688 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.688 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:06.688 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.688 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.688 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:06.688 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:06.688 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.688 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:06.688 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.688 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.688 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:06.688 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:06.688 EAL: Hugepages will be freed exactly as allocated. 
00:04:06.688 EAL: No shared files mode enabled, IPC is disabled 00:04:06.688 EAL: No shared files mode enabled, IPC is disabled 00:04:06.947 EAL: TSC frequency is ~2290000 KHz 00:04:06.947 EAL: Main lcore 0 is ready (tid=7f999bc1ba40;cpuset=[0]) 00:04:06.947 EAL: Trying to obtain current memory policy. 00:04:06.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.948 EAL: Restoring previous memory policy: 0 00:04:06.948 EAL: request: mp_malloc_sync 00:04:06.948 EAL: No shared files mode enabled, IPC is disabled 00:04:06.948 EAL: Heap on socket 0 was expanded by 2MB 00:04:06.948 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:06.948 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:06.948 EAL: Mem event callback 'spdk:(nil)' registered 00:04:06.948 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:06.948 00:04:06.948 00:04:06.948 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.948 http://cunit.sourceforge.net/ 00:04:06.948 00:04:06.948 00:04:06.948 Suite: components_suite 00:04:07.207 Test: vtophys_malloc_test ...passed 00:04:07.207 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:07.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.207 EAL: Restoring previous memory policy: 4 00:04:07.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.207 EAL: request: mp_malloc_sync 00:04:07.207 EAL: No shared files mode enabled, IPC is disabled 00:04:07.207 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.207 EAL: request: mp_malloc_sync 00:04:07.207 EAL: No shared files mode enabled, IPC is disabled 00:04:07.207 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.207 EAL: Trying to obtain current memory policy. 
00:04:07.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.207 EAL: Restoring previous memory policy: 4 00:04:07.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.207 EAL: request: mp_malloc_sync 00:04:07.207 EAL: No shared files mode enabled, IPC is disabled 00:04:07.207 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.207 EAL: request: mp_malloc_sync 00:04:07.207 EAL: No shared files mode enabled, IPC is disabled 00:04:07.207 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.207 EAL: Trying to obtain current memory policy. 00:04:07.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.207 EAL: Restoring previous memory policy: 4 00:04:07.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.207 EAL: request: mp_malloc_sync 00:04:07.207 EAL: No shared files mode enabled, IPC is disabled 00:04:07.207 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.207 EAL: request: mp_malloc_sync 00:04:07.207 EAL: No shared files mode enabled, IPC is disabled 00:04:07.207 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.207 EAL: Trying to obtain current memory policy. 00:04:07.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.207 EAL: Restoring previous memory policy: 4 00:04:07.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.207 EAL: request: mp_malloc_sync 00:04:07.207 EAL: No shared files mode enabled, IPC is disabled 00:04:07.207 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.467 EAL: request: mp_malloc_sync 00:04:07.467 EAL: No shared files mode enabled, IPC is disabled 00:04:07.467 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.467 EAL: Trying to obtain current memory policy. 
00:04:07.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.467 EAL: Restoring previous memory policy: 4 00:04:07.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.467 EAL: request: mp_malloc_sync 00:04:07.467 EAL: No shared files mode enabled, IPC is disabled 00:04:07.467 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.467 EAL: request: mp_malloc_sync 00:04:07.467 EAL: No shared files mode enabled, IPC is disabled 00:04:07.467 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.467 EAL: Trying to obtain current memory policy. 00:04:07.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.467 EAL: Restoring previous memory policy: 4 00:04:07.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.467 EAL: request: mp_malloc_sync 00:04:07.467 EAL: No shared files mode enabled, IPC is disabled 00:04:07.467 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.727 EAL: request: mp_malloc_sync 00:04:07.727 EAL: No shared files mode enabled, IPC is disabled 00:04:07.727 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.727 EAL: Trying to obtain current memory policy. 00:04:07.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.727 EAL: Restoring previous memory policy: 4 00:04:07.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.727 EAL: request: mp_malloc_sync 00:04:07.727 EAL: No shared files mode enabled, IPC is disabled 00:04:07.727 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.988 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.988 EAL: request: mp_malloc_sync 00:04:07.988 EAL: No shared files mode enabled, IPC is disabled 00:04:07.988 EAL: Heap on socket 0 was shrunk by 130MB 00:04:08.249 EAL: Trying to obtain current memory policy. 
00:04:08.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.249 EAL: Restoring previous memory policy: 4 00:04:08.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.249 EAL: request: mp_malloc_sync 00:04:08.249 EAL: No shared files mode enabled, IPC is disabled 00:04:08.249 EAL: Heap on socket 0 was expanded by 258MB 00:04:08.819 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.819 EAL: request: mp_malloc_sync 00:04:08.819 EAL: No shared files mode enabled, IPC is disabled 00:04:08.819 EAL: Heap on socket 0 was shrunk by 258MB 00:04:09.390 EAL: Trying to obtain current memory policy. 00:04:09.390 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.390 EAL: Restoring previous memory policy: 4 00:04:09.390 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.390 EAL: request: mp_malloc_sync 00:04:09.390 EAL: No shared files mode enabled, IPC is disabled 00:04:09.390 EAL: Heap on socket 0 was expanded by 514MB 00:04:10.328 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.328 EAL: request: mp_malloc_sync 00:04:10.328 EAL: No shared files mode enabled, IPC is disabled 00:04:10.328 EAL: Heap on socket 0 was shrunk by 514MB 00:04:11.267 EAL: Trying to obtain current memory policy. 
00:04:11.267 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.526 EAL: Restoring previous memory policy: 4 00:04:11.526 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.526 EAL: request: mp_malloc_sync 00:04:11.526 EAL: No shared files mode enabled, IPC is disabled 00:04:11.526 EAL: Heap on socket 0 was expanded by 1026MB 00:04:13.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.430 EAL: request: mp_malloc_sync 00:04:13.430 EAL: No shared files mode enabled, IPC is disabled 00:04:13.430 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:15.332 passed 00:04:15.332 00:04:15.332 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.332 suites 1 1 n/a 0 0 00:04:15.332 tests 2 2 2 0 0 00:04:15.332 asserts 5824 5824 5824 0 n/a 00:04:15.332 00:04:15.332 Elapsed time = 8.268 seconds 00:04:15.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.332 EAL: request: mp_malloc_sync 00:04:15.332 EAL: No shared files mode enabled, IPC is disabled 00:04:15.332 EAL: Heap on socket 0 was shrunk by 2MB 00:04:15.332 EAL: No shared files mode enabled, IPC is disabled 00:04:15.332 EAL: No shared files mode enabled, IPC is disabled 00:04:15.332 EAL: No shared files mode enabled, IPC is disabled 00:04:15.332 00:04:15.332 real 0m8.604s 00:04:15.332 user 0m7.592s 00:04:15.332 sys 0m0.853s 00:04:15.332 21:11:33 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.332 21:11:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:15.332 ************************************ 00:04:15.332 END TEST env_vtophys 00:04:15.332 ************************************ 00:04:15.332 21:11:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:15.332 21:11:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.332 21:11:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.333 21:11:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.333 
************************************ 00:04:15.333 START TEST env_pci ************************************ 00:04:15.333 21:11:33 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:15.333 00:04:15.333 00:04:15.333 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.333 http://cunit.sourceforge.net/ 00:04:15.333 00:04:15.333 00:04:15.333 Suite: pci 00:04:15.333 Test: pci_hook ...[2024-11-26 21:11:33.388244] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56726 has claimed it 00:04:15.333 passed 00:04:15.333 00:04:15.333 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.333 suites 1 1 n/a 0 0 00:04:15.333 tests 1 1 1 0 0 00:04:15.333 asserts 25 25 25 0 n/a 00:04:15.333 00:04:15.333 Elapsed time = 0.006 seconds 00:04:15.333 EAL: Cannot find device (10000:00:01.0) 00:04:15.333 EAL: Failed to attach device on primary process 00:04:15.333 00:04:15.333 real 0m0.103s 00:04:15.333 user 0m0.049s 00:04:15.333 sys 0m0.053s 00:04:15.333 21:11:33 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.333 21:11:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:15.333 ************************************ 00:04:15.333 END TEST env_pci 00:04:15.333 ************************************ 00:04:15.593 21:11:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:15.593 21:11:33 env -- env/env.sh@15 -- # uname 00:04:15.593 21:11:33 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:15.593 21:11:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:15.593 21:11:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.593 21:11:33 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:15.593 21:11:33 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.593 21:11:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.593 ************************************ 00:04:15.593 START TEST env_dpdk_post_init 00:04:15.593 ************************************ 00:04:15.593 21:11:33 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.593 EAL: Detected CPU lcores: 10 00:04:15.593 EAL: Detected NUMA nodes: 1 00:04:15.593 EAL: Detected shared linkage of DPDK 00:04:15.593 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.593 EAL: Selected IOVA mode 'PA' 00:04:15.593 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.852 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:15.852 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:15.852 Starting DPDK initialization... 00:04:15.852 Starting SPDK post initialization... 00:04:15.852 SPDK NVMe probe 00:04:15.852 Attaching to 0000:00:10.0 00:04:15.852 Attaching to 0000:00:11.0 00:04:15.852 Attached to 0000:00:10.0 00:04:15.852 Attached to 0000:00:11.0 00:04:15.852 Cleaning up... 
00:04:15.852 00:04:15.852 real 0m0.296s 00:04:15.852 user 0m0.101s 00:04:15.852 sys 0m0.094s 00:04:15.852 21:11:33 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.852 21:11:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.852 ************************************ 00:04:15.852 END TEST env_dpdk_post_init 00:04:15.852 ************************************ 00:04:15.852 21:11:33 env -- env/env.sh@26 -- # uname 00:04:15.852 21:11:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:15.852 21:11:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.852 21:11:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.852 21:11:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.852 21:11:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.852 ************************************ 00:04:15.852 START TEST env_mem_callbacks 00:04:15.852 ************************************ 00:04:15.852 21:11:33 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.852 EAL: Detected CPU lcores: 10 00:04:15.853 EAL: Detected NUMA nodes: 1 00:04:15.853 EAL: Detected shared linkage of DPDK 00:04:15.853 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.853 EAL: Selected IOVA mode 'PA' 00:04:16.183 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:16.183 00:04:16.183 00:04:16.183 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.183 http://cunit.sourceforge.net/ 00:04:16.183 00:04:16.183 00:04:16.183 Suite: memory 00:04:16.183 Test: test ... 
00:04:16.183 register 0x200000200000 2097152 00:04:16.183 malloc 3145728 00:04:16.183 register 0x200000400000 4194304 00:04:16.183 buf 0x2000004fffc0 len 3145728 PASSED 00:04:16.183 malloc 64 00:04:16.183 buf 0x2000004ffec0 len 64 PASSED 00:04:16.183 malloc 4194304 00:04:16.183 register 0x200000800000 6291456 00:04:16.183 buf 0x2000009fffc0 len 4194304 PASSED 00:04:16.183 free 0x2000004fffc0 3145728 00:04:16.183 free 0x2000004ffec0 64 00:04:16.183 unregister 0x200000400000 4194304 PASSED 00:04:16.183 free 0x2000009fffc0 4194304 00:04:16.183 unregister 0x200000800000 6291456 PASSED 00:04:16.183 malloc 8388608 00:04:16.183 register 0x200000400000 10485760 00:04:16.183 buf 0x2000005fffc0 len 8388608 PASSED 00:04:16.183 free 0x2000005fffc0 8388608 00:04:16.183 unregister 0x200000400000 10485760 PASSED 00:04:16.183 passed 00:04:16.183 00:04:16.183 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.183 suites 1 1 n/a 0 0 00:04:16.183 tests 1 1 1 0 0 00:04:16.183 asserts 15 15 15 0 n/a 00:04:16.183 00:04:16.183 Elapsed time = 0.086 seconds 00:04:16.183 00:04:16.183 real 0m0.291s 00:04:16.183 user 0m0.123s 00:04:16.183 sys 0m0.064s 00:04:16.183 21:11:34 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.183 21:11:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:16.183 ************************************ 00:04:16.183 END TEST env_mem_callbacks 00:04:16.183 ************************************ 00:04:16.442 00:04:16.443 real 0m10.150s 00:04:16.443 user 0m8.350s 00:04:16.443 sys 0m1.446s 00:04:16.443 ************************************ 00:04:16.443 END TEST env 00:04:16.443 ************************************ 00:04:16.443 21:11:34 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.443 21:11:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.443 21:11:34 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:16.443 21:11:34 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.443 21:11:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.443 21:11:34 -- common/autotest_common.sh@10 -- # set +x 00:04:16.443 ************************************ 00:04:16.443 START TEST rpc 00:04:16.443 ************************************ 00:04:16.443 21:11:34 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:16.443 * Looking for test storage... 00:04:16.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.443 21:11:34 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:16.443 21:11:34 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:16.443 21:11:34 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:16.443 21:11:34 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:16.443 21:11:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.443 21:11:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.443 21:11:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.443 21:11:34 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.443 21:11:34 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.443 21:11:34 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.443 21:11:34 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.443 21:11:34 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.443 21:11:34 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.443 21:11:34 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.443 21:11:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.443 21:11:34 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:16.443 21:11:34 rpc -- scripts/common.sh@345 -- # : 1 00:04:16.443 21:11:34 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.443 21:11:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.443 21:11:34 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:16.443 21:11:34 rpc -- scripts/common.sh@353 -- # local d=1 00:04:16.443 21:11:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.443 21:11:34 rpc -- scripts/common.sh@355 -- # echo 1 00:04:16.443 21:11:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.443 21:11:34 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:16.443 21:11:34 rpc -- scripts/common.sh@353 -- # local d=2 00:04:16.443 21:11:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.443 21:11:34 rpc -- scripts/common.sh@355 -- # echo 2 00:04:16.443 21:11:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.702 21:11:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.702 21:11:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.702 21:11:34 rpc -- scripts/common.sh@368 -- # return 0 00:04:16.702 21:11:34 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.702 21:11:34 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:16.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.702 --rc genhtml_branch_coverage=1 00:04:16.702 --rc genhtml_function_coverage=1 00:04:16.702 --rc genhtml_legend=1 00:04:16.702 --rc geninfo_all_blocks=1 00:04:16.702 --rc geninfo_unexecuted_blocks=1 00:04:16.702 00:04:16.702 ' 00:04:16.702 21:11:34 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:16.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.702 --rc genhtml_branch_coverage=1 00:04:16.702 --rc genhtml_function_coverage=1 00:04:16.702 --rc genhtml_legend=1 00:04:16.702 --rc geninfo_all_blocks=1 00:04:16.702 --rc geninfo_unexecuted_blocks=1 00:04:16.702 00:04:16.702 ' 00:04:16.702 21:11:34 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:16.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:16.702 --rc genhtml_branch_coverage=1 00:04:16.702 --rc genhtml_function_coverage=1 00:04:16.702 --rc genhtml_legend=1 00:04:16.702 --rc geninfo_all_blocks=1 00:04:16.702 --rc geninfo_unexecuted_blocks=1 00:04:16.702 00:04:16.702 ' 00:04:16.702 21:11:34 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:16.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.702 --rc genhtml_branch_coverage=1 00:04:16.702 --rc genhtml_function_coverage=1 00:04:16.702 --rc genhtml_legend=1 00:04:16.702 --rc geninfo_all_blocks=1 00:04:16.702 --rc geninfo_unexecuted_blocks=1 00:04:16.702 00:04:16.702 ' 00:04:16.702 21:11:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56853 00:04:16.702 21:11:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:16.702 21:11:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.702 21:11:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56853 00:04:16.702 21:11:34 rpc -- common/autotest_common.sh@835 -- # '[' -z 56853 ']' 00:04:16.702 21:11:34 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.702 21:11:34 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.702 21:11:34 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.703 21:11:34 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.703 21:11:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.703 [2024-11-26 21:11:34.717313] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:04:16.703 [2024-11-26 21:11:34.717541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56853 ] 00:04:16.961 [2024-11-26 21:11:34.896387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.961 [2024-11-26 21:11:35.021382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:16.961 [2024-11-26 21:11:35.021456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56853' to capture a snapshot of events at runtime. 00:04:16.961 [2024-11-26 21:11:35.021471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:16.961 [2024-11-26 21:11:35.021482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:16.961 [2024-11-26 21:11:35.021490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56853 for offline analysis/debug. 
00:04:16.961 [2024-11-26 21:11:35.022733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.900 21:11:35 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.900 21:11:35 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:17.900 21:11:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.900 21:11:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.900 21:11:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:17.900 21:11:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:17.900 21:11:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.900 21:11:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.900 21:11:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.900 ************************************ 00:04:17.900 START TEST rpc_integrity 00:04:17.900 ************************************ 00:04:17.900 21:11:35 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:17.900 21:11:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.900 21:11:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.900 21:11:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.900 21:11:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.900 21:11:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.900 21:11:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.900 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.900 21:11:36 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.900 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.900 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.900 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.900 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:17.900 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.900 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.900 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.160 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.160 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:18.160 { 00:04:18.160 "name": "Malloc0", 00:04:18.160 "aliases": [ 00:04:18.160 "0c8b6511-d551-4b45-ac74-305c85bfce83" 00:04:18.160 ], 00:04:18.160 "product_name": "Malloc disk", 00:04:18.160 "block_size": 512, 00:04:18.160 "num_blocks": 16384, 00:04:18.160 "uuid": "0c8b6511-d551-4b45-ac74-305c85bfce83", 00:04:18.160 "assigned_rate_limits": { 00:04:18.160 "rw_ios_per_sec": 0, 00:04:18.160 "rw_mbytes_per_sec": 0, 00:04:18.160 "r_mbytes_per_sec": 0, 00:04:18.160 "w_mbytes_per_sec": 0 00:04:18.160 }, 00:04:18.160 "claimed": false, 00:04:18.160 "zoned": false, 00:04:18.160 "supported_io_types": { 00:04:18.160 "read": true, 00:04:18.160 "write": true, 00:04:18.160 "unmap": true, 00:04:18.160 "flush": true, 00:04:18.160 "reset": true, 00:04:18.160 "nvme_admin": false, 00:04:18.160 "nvme_io": false, 00:04:18.160 "nvme_io_md": false, 00:04:18.160 "write_zeroes": true, 00:04:18.160 "zcopy": true, 00:04:18.160 "get_zone_info": false, 00:04:18.160 "zone_management": false, 00:04:18.160 "zone_append": false, 00:04:18.160 "compare": false, 00:04:18.160 "compare_and_write": false, 00:04:18.160 "abort": true, 00:04:18.160 "seek_hole": false, 
00:04:18.160 "seek_data": false, 00:04:18.160 "copy": true, 00:04:18.160 "nvme_iov_md": false 00:04:18.160 }, 00:04:18.160 "memory_domains": [ 00:04:18.160 { 00:04:18.160 "dma_device_id": "system", 00:04:18.160 "dma_device_type": 1 00:04:18.160 }, 00:04:18.160 { 00:04:18.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.160 "dma_device_type": 2 00:04:18.160 } 00:04:18.160 ], 00:04:18.160 "driver_specific": {} 00:04:18.160 } 00:04:18.160 ]' 00:04:18.160 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:18.160 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:18.160 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:18.160 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.160 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.160 [2024-11-26 21:11:36.120390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:18.160 [2024-11-26 21:11:36.120490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:18.160 [2024-11-26 21:11:36.120518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:18.160 [2024-11-26 21:11:36.120534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:18.160 [2024-11-26 21:11:36.123178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:18.160 [2024-11-26 21:11:36.123291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:18.160 Passthru0 00:04:18.160 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.160 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:18.160 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.160 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:18.160 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.161 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:18.161 { 00:04:18.161 "name": "Malloc0", 00:04:18.161 "aliases": [ 00:04:18.161 "0c8b6511-d551-4b45-ac74-305c85bfce83" 00:04:18.161 ], 00:04:18.161 "product_name": "Malloc disk", 00:04:18.161 "block_size": 512, 00:04:18.161 "num_blocks": 16384, 00:04:18.161 "uuid": "0c8b6511-d551-4b45-ac74-305c85bfce83", 00:04:18.161 "assigned_rate_limits": { 00:04:18.161 "rw_ios_per_sec": 0, 00:04:18.161 "rw_mbytes_per_sec": 0, 00:04:18.161 "r_mbytes_per_sec": 0, 00:04:18.161 "w_mbytes_per_sec": 0 00:04:18.161 }, 00:04:18.161 "claimed": true, 00:04:18.161 "claim_type": "exclusive_write", 00:04:18.161 "zoned": false, 00:04:18.161 "supported_io_types": { 00:04:18.161 "read": true, 00:04:18.161 "write": true, 00:04:18.161 "unmap": true, 00:04:18.161 "flush": true, 00:04:18.161 "reset": true, 00:04:18.161 "nvme_admin": false, 00:04:18.161 "nvme_io": false, 00:04:18.161 "nvme_io_md": false, 00:04:18.161 "write_zeroes": true, 00:04:18.161 "zcopy": true, 00:04:18.161 "get_zone_info": false, 00:04:18.161 "zone_management": false, 00:04:18.161 "zone_append": false, 00:04:18.161 "compare": false, 00:04:18.161 "compare_and_write": false, 00:04:18.161 "abort": true, 00:04:18.161 "seek_hole": false, 00:04:18.161 "seek_data": false, 00:04:18.161 "copy": true, 00:04:18.161 "nvme_iov_md": false 00:04:18.161 }, 00:04:18.161 "memory_domains": [ 00:04:18.161 { 00:04:18.161 "dma_device_id": "system", 00:04:18.161 "dma_device_type": 1 00:04:18.161 }, 00:04:18.161 { 00:04:18.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.161 "dma_device_type": 2 00:04:18.161 } 00:04:18.161 ], 00:04:18.161 "driver_specific": {} 00:04:18.161 }, 00:04:18.161 { 00:04:18.161 "name": "Passthru0", 00:04:18.161 "aliases": [ 00:04:18.161 "144d87a5-2819-5f15-aee5-fdecc8fe2fa9" 00:04:18.161 ], 00:04:18.161 "product_name": "passthru", 00:04:18.161 
"block_size": 512, 00:04:18.161 "num_blocks": 16384, 00:04:18.161 "uuid": "144d87a5-2819-5f15-aee5-fdecc8fe2fa9", 00:04:18.161 "assigned_rate_limits": { 00:04:18.161 "rw_ios_per_sec": 0, 00:04:18.161 "rw_mbytes_per_sec": 0, 00:04:18.161 "r_mbytes_per_sec": 0, 00:04:18.161 "w_mbytes_per_sec": 0 00:04:18.161 }, 00:04:18.161 "claimed": false, 00:04:18.161 "zoned": false, 00:04:18.161 "supported_io_types": { 00:04:18.161 "read": true, 00:04:18.161 "write": true, 00:04:18.161 "unmap": true, 00:04:18.161 "flush": true, 00:04:18.161 "reset": true, 00:04:18.161 "nvme_admin": false, 00:04:18.161 "nvme_io": false, 00:04:18.161 "nvme_io_md": false, 00:04:18.161 "write_zeroes": true, 00:04:18.161 "zcopy": true, 00:04:18.161 "get_zone_info": false, 00:04:18.161 "zone_management": false, 00:04:18.161 "zone_append": false, 00:04:18.161 "compare": false, 00:04:18.161 "compare_and_write": false, 00:04:18.161 "abort": true, 00:04:18.161 "seek_hole": false, 00:04:18.161 "seek_data": false, 00:04:18.161 "copy": true, 00:04:18.161 "nvme_iov_md": false 00:04:18.161 }, 00:04:18.161 "memory_domains": [ 00:04:18.161 { 00:04:18.161 "dma_device_id": "system", 00:04:18.161 "dma_device_type": 1 00:04:18.161 }, 00:04:18.161 { 00:04:18.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.161 "dma_device_type": 2 00:04:18.161 } 00:04:18.161 ], 00:04:18.161 "driver_specific": { 00:04:18.161 "passthru": { 00:04:18.161 "name": "Passthru0", 00:04:18.161 "base_bdev_name": "Malloc0" 00:04:18.161 } 00:04:18.161 } 00:04:18.161 } 00:04:18.161 ]' 00:04:18.161 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:18.161 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:18.161 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:18.161 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.161 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.161 21:11:36 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.161 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:18.161 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.161 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.161 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.161 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:18.161 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.161 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.161 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.161 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:18.161 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:18.161 ************************************ 00:04:18.161 END TEST rpc_integrity 00:04:18.161 ************************************ 00:04:18.161 21:11:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:18.161 00:04:18.161 real 0m0.333s 00:04:18.161 user 0m0.175s 00:04:18.161 sys 0m0.051s 00:04:18.161 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.161 21:11:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.422 21:11:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:18.422 21:11:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.422 21:11:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.422 21:11:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.422 ************************************ 00:04:18.422 START TEST rpc_plugins 00:04:18.422 ************************************ 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:18.422 21:11:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.422 21:11:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:18.422 21:11:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.422 21:11:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:18.422 { 00:04:18.422 "name": "Malloc1", 00:04:18.422 "aliases": [ 00:04:18.422 "f59d70a7-a6b9-4c85-9996-cdbbd56fbcc0" 00:04:18.422 ], 00:04:18.422 "product_name": "Malloc disk", 00:04:18.422 "block_size": 4096, 00:04:18.422 "num_blocks": 256, 00:04:18.422 "uuid": "f59d70a7-a6b9-4c85-9996-cdbbd56fbcc0", 00:04:18.422 "assigned_rate_limits": { 00:04:18.422 "rw_ios_per_sec": 0, 00:04:18.422 "rw_mbytes_per_sec": 0, 00:04:18.422 "r_mbytes_per_sec": 0, 00:04:18.422 "w_mbytes_per_sec": 0 00:04:18.422 }, 00:04:18.422 "claimed": false, 00:04:18.422 "zoned": false, 00:04:18.422 "supported_io_types": { 00:04:18.422 "read": true, 00:04:18.422 "write": true, 00:04:18.422 "unmap": true, 00:04:18.422 "flush": true, 00:04:18.422 "reset": true, 00:04:18.422 "nvme_admin": false, 00:04:18.422 "nvme_io": false, 00:04:18.422 "nvme_io_md": false, 00:04:18.422 "write_zeroes": true, 00:04:18.422 "zcopy": true, 00:04:18.422 "get_zone_info": false, 00:04:18.422 "zone_management": false, 00:04:18.422 "zone_append": false, 00:04:18.422 "compare": false, 00:04:18.422 "compare_and_write": false, 00:04:18.422 "abort": true, 00:04:18.422 "seek_hole": false, 00:04:18.422 "seek_data": false, 00:04:18.422 "copy": 
true, 00:04:18.422 "nvme_iov_md": false 00:04:18.422 }, 00:04:18.422 "memory_domains": [ 00:04:18.422 { 00:04:18.422 "dma_device_id": "system", 00:04:18.422 "dma_device_type": 1 00:04:18.422 }, 00:04:18.422 { 00:04:18.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.422 "dma_device_type": 2 00:04:18.422 } 00:04:18.422 ], 00:04:18.422 "driver_specific": {} 00:04:18.422 } 00:04:18.422 ]' 00:04:18.422 21:11:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:18.422 21:11:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:18.422 21:11:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.422 21:11:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.422 21:11:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:18.422 21:11:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:18.422 ************************************ 00:04:18.422 END TEST rpc_plugins 00:04:18.422 ************************************ 00:04:18.422 21:11:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:18.422 00:04:18.422 real 0m0.165s 00:04:18.422 user 0m0.088s 00:04:18.422 sys 0m0.024s 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.422 21:11:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.422 21:11:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:18.422 21:11:36 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.422 21:11:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.422 21:11:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.422 ************************************ 00:04:18.422 START TEST rpc_trace_cmd_test 00:04:18.422 ************************************ 00:04:18.422 21:11:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:18.422 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:18.422 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:18.422 21:11:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.422 21:11:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:18.682 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56853", 00:04:18.682 "tpoint_group_mask": "0x8", 00:04:18.682 "iscsi_conn": { 00:04:18.682 "mask": "0x2", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "scsi": { 00:04:18.682 "mask": "0x4", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "bdev": { 00:04:18.682 "mask": "0x8", 00:04:18.682 "tpoint_mask": "0xffffffffffffffff" 00:04:18.682 }, 00:04:18.682 "nvmf_rdma": { 00:04:18.682 "mask": "0x10", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "nvmf_tcp": { 00:04:18.682 "mask": "0x20", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "ftl": { 00:04:18.682 "mask": "0x40", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "blobfs": { 00:04:18.682 "mask": "0x80", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "dsa": { 00:04:18.682 "mask": "0x200", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "thread": { 00:04:18.682 "mask": "0x400", 00:04:18.682 
"tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "nvme_pcie": { 00:04:18.682 "mask": "0x800", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "iaa": { 00:04:18.682 "mask": "0x1000", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "nvme_tcp": { 00:04:18.682 "mask": "0x2000", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "bdev_nvme": { 00:04:18.682 "mask": "0x4000", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "sock": { 00:04:18.682 "mask": "0x8000", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "blob": { 00:04:18.682 "mask": "0x10000", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "bdev_raid": { 00:04:18.682 "mask": "0x20000", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 }, 00:04:18.682 "scheduler": { 00:04:18.682 "mask": "0x40000", 00:04:18.682 "tpoint_mask": "0x0" 00:04:18.682 } 00:04:18.682 }' 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:18.682 ************************************ 00:04:18.682 END TEST rpc_trace_cmd_test 00:04:18.682 ************************************ 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:18.682 00:04:18.682 real 0m0.244s 00:04:18.682 user 
0m0.201s 00:04:18.682 sys 0m0.030s 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.682 21:11:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:18.942 21:11:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:18.942 21:11:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:18.942 21:11:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:18.942 21:11:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.942 21:11:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.942 21:11:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.942 ************************************ 00:04:18.942 START TEST rpc_daemon_integrity 00:04:18.942 ************************************ 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:18.942 { 00:04:18.942 "name": "Malloc2", 00:04:18.942 "aliases": [ 00:04:18.942 "514c2206-f5a2-4fd5-ba80-6861e61a8fa2" 00:04:18.942 ], 00:04:18.942 "product_name": "Malloc disk", 00:04:18.942 "block_size": 512, 00:04:18.942 "num_blocks": 16384, 00:04:18.942 "uuid": "514c2206-f5a2-4fd5-ba80-6861e61a8fa2", 00:04:18.942 "assigned_rate_limits": { 00:04:18.942 "rw_ios_per_sec": 0, 00:04:18.942 "rw_mbytes_per_sec": 0, 00:04:18.942 "r_mbytes_per_sec": 0, 00:04:18.942 "w_mbytes_per_sec": 0 00:04:18.942 }, 00:04:18.942 "claimed": false, 00:04:18.942 "zoned": false, 00:04:18.942 "supported_io_types": { 00:04:18.942 "read": true, 00:04:18.942 "write": true, 00:04:18.942 "unmap": true, 00:04:18.942 "flush": true, 00:04:18.942 "reset": true, 00:04:18.942 "nvme_admin": false, 00:04:18.942 "nvme_io": false, 00:04:18.942 "nvme_io_md": false, 00:04:18.942 "write_zeroes": true, 00:04:18.942 "zcopy": true, 00:04:18.942 "get_zone_info": false, 00:04:18.942 "zone_management": false, 00:04:18.942 "zone_append": false, 00:04:18.942 "compare": false, 00:04:18.942 "compare_and_write": false, 00:04:18.942 "abort": true, 00:04:18.942 "seek_hole": false, 00:04:18.942 "seek_data": false, 00:04:18.942 "copy": true, 00:04:18.942 "nvme_iov_md": false 00:04:18.942 }, 00:04:18.942 "memory_domains": [ 00:04:18.942 { 00:04:18.942 "dma_device_id": "system", 00:04:18.942 "dma_device_type": 1 00:04:18.942 }, 00:04:18.942 { 00:04:18.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.942 "dma_device_type": 2 00:04:18.942 } 
00:04:18.942 ], 00:04:18.942 "driver_specific": {} 00:04:18.942 } 00:04:18.942 ]' 00:04:18.942 21:11:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:18.942 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:18.942 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:18.942 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.942 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.942 [2024-11-26 21:11:37.031761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:18.942 [2024-11-26 21:11:37.031845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:18.942 [2024-11-26 21:11:37.031869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:18.942 [2024-11-26 21:11:37.031881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:18.942 [2024-11-26 21:11:37.034412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:18.942 [2024-11-26 21:11:37.034466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:18.942 Passthru0 00:04:18.942 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.942 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:18.942 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.942 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.942 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.942 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:18.942 { 00:04:18.942 "name": "Malloc2", 00:04:18.942 "aliases": [ 00:04:18.942 "514c2206-f5a2-4fd5-ba80-6861e61a8fa2" 
00:04:18.942 ], 00:04:18.942 "product_name": "Malloc disk", 00:04:18.942 "block_size": 512, 00:04:18.942 "num_blocks": 16384, 00:04:18.942 "uuid": "514c2206-f5a2-4fd5-ba80-6861e61a8fa2", 00:04:18.942 "assigned_rate_limits": { 00:04:18.942 "rw_ios_per_sec": 0, 00:04:18.942 "rw_mbytes_per_sec": 0, 00:04:18.942 "r_mbytes_per_sec": 0, 00:04:18.942 "w_mbytes_per_sec": 0 00:04:18.942 }, 00:04:18.942 "claimed": true, 00:04:18.942 "claim_type": "exclusive_write", 00:04:18.942 "zoned": false, 00:04:18.942 "supported_io_types": { 00:04:18.942 "read": true, 00:04:18.942 "write": true, 00:04:18.942 "unmap": true, 00:04:18.942 "flush": true, 00:04:18.942 "reset": true, 00:04:18.942 "nvme_admin": false, 00:04:18.942 "nvme_io": false, 00:04:18.942 "nvme_io_md": false, 00:04:18.942 "write_zeroes": true, 00:04:18.942 "zcopy": true, 00:04:18.942 "get_zone_info": false, 00:04:18.942 "zone_management": false, 00:04:18.942 "zone_append": false, 00:04:18.942 "compare": false, 00:04:18.942 "compare_and_write": false, 00:04:18.942 "abort": true, 00:04:18.942 "seek_hole": false, 00:04:18.942 "seek_data": false, 00:04:18.942 "copy": true, 00:04:18.942 "nvme_iov_md": false 00:04:18.942 }, 00:04:18.942 "memory_domains": [ 00:04:18.942 { 00:04:18.942 "dma_device_id": "system", 00:04:18.942 "dma_device_type": 1 00:04:18.942 }, 00:04:18.942 { 00:04:18.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.942 "dma_device_type": 2 00:04:18.942 } 00:04:18.942 ], 00:04:18.942 "driver_specific": {} 00:04:18.942 }, 00:04:18.942 { 00:04:18.942 "name": "Passthru0", 00:04:18.942 "aliases": [ 00:04:18.942 "c84a60b0-b010-5fcc-85b3-7dd13c5a9915" 00:04:18.942 ], 00:04:18.942 "product_name": "passthru", 00:04:18.942 "block_size": 512, 00:04:18.942 "num_blocks": 16384, 00:04:18.942 "uuid": "c84a60b0-b010-5fcc-85b3-7dd13c5a9915", 00:04:18.942 "assigned_rate_limits": { 00:04:18.942 "rw_ios_per_sec": 0, 00:04:18.942 "rw_mbytes_per_sec": 0, 00:04:18.942 "r_mbytes_per_sec": 0, 00:04:18.942 "w_mbytes_per_sec": 0 
00:04:18.942 }, 00:04:18.942 "claimed": false, 00:04:18.942 "zoned": false, 00:04:18.942 "supported_io_types": { 00:04:18.942 "read": true, 00:04:18.942 "write": true, 00:04:18.942 "unmap": true, 00:04:18.942 "flush": true, 00:04:18.942 "reset": true, 00:04:18.942 "nvme_admin": false, 00:04:18.942 "nvme_io": false, 00:04:18.942 "nvme_io_md": false, 00:04:18.942 "write_zeroes": true, 00:04:18.942 "zcopy": true, 00:04:18.942 "get_zone_info": false, 00:04:18.942 "zone_management": false, 00:04:18.942 "zone_append": false, 00:04:18.942 "compare": false, 00:04:18.942 "compare_and_write": false, 00:04:18.942 "abort": true, 00:04:18.942 "seek_hole": false, 00:04:18.942 "seek_data": false, 00:04:18.942 "copy": true, 00:04:18.942 "nvme_iov_md": false 00:04:18.942 }, 00:04:18.942 "memory_domains": [ 00:04:18.942 { 00:04:18.942 "dma_device_id": "system", 00:04:18.942 "dma_device_type": 1 00:04:18.942 }, 00:04:18.942 { 00:04:18.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.942 "dma_device_type": 2 00:04:18.942 } 00:04:18.942 ], 00:04:18.942 "driver_specific": { 00:04:18.942 "passthru": { 00:04:18.942 "name": "Passthru0", 00:04:18.942 "base_bdev_name": "Malloc2" 00:04:18.942 } 00:04:18.942 } 00:04:18.942 } 00:04:18.942 ]' 00:04:18.942 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:19.202 21:11:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:19.202 ************************************ 00:04:19.202 END TEST rpc_daemon_integrity 00:04:19.202 ************************************ 00:04:19.202 00:04:19.202 real 0m0.325s 00:04:19.202 user 0m0.173s 00:04:19.203 sys 0m0.046s 00:04:19.203 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.203 21:11:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.203 21:11:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:19.203 21:11:37 rpc -- rpc/rpc.sh@84 -- # killprocess 56853 00:04:19.203 21:11:37 rpc -- common/autotest_common.sh@954 -- # '[' -z 56853 ']' 00:04:19.203 21:11:37 rpc -- common/autotest_common.sh@958 -- # kill -0 56853 00:04:19.203 21:11:37 rpc -- common/autotest_common.sh@959 -- # uname 00:04:19.203 21:11:37 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.203 21:11:37 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56853 00:04:19.203 killing process with pid 56853 00:04:19.203 21:11:37 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.203 21:11:37 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:19.203 21:11:37 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56853' 00:04:19.203 21:11:37 rpc -- common/autotest_common.sh@973 -- # kill 56853 00:04:19.203 21:11:37 rpc -- common/autotest_common.sh@978 -- # wait 56853 00:04:21.792 ************************************ 00:04:21.792 END TEST rpc 00:04:21.792 ************************************ 00:04:21.792 00:04:21.792 real 0m5.264s 00:04:21.792 user 0m5.754s 00:04:21.792 sys 0m0.912s 00:04:21.792 21:11:39 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.792 21:11:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.792 21:11:39 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:21.792 21:11:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.792 21:11:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.792 21:11:39 -- common/autotest_common.sh@10 -- # set +x 00:04:21.792 ************************************ 00:04:21.792 START TEST skip_rpc 00:04:21.792 ************************************ 00:04:21.792 21:11:39 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:21.792 * Looking for test storage... 
00:04:21.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.792 21:11:39 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.792 21:11:39 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.792 21:11:39 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.792 21:11:39 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.792 21:11:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.052 21:11:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:22.052 21:11:39 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:22.052 21:11:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.052 21:11:39 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:22.052 21:11:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.052 21:11:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.052 21:11:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.052 21:11:39 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:22.052 21:11:39 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.052 21:11:39 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:22.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.052 --rc genhtml_branch_coverage=1 00:04:22.052 --rc genhtml_function_coverage=1 00:04:22.052 --rc genhtml_legend=1 00:04:22.052 --rc geninfo_all_blocks=1 00:04:22.052 --rc geninfo_unexecuted_blocks=1 00:04:22.052 00:04:22.052 ' 00:04:22.052 21:11:39 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:22.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.052 --rc genhtml_branch_coverage=1 00:04:22.052 --rc genhtml_function_coverage=1 00:04:22.052 --rc genhtml_legend=1 00:04:22.052 --rc geninfo_all_blocks=1 00:04:22.052 --rc geninfo_unexecuted_blocks=1 00:04:22.052 00:04:22.052 ' 00:04:22.052 21:11:39 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:22.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.052 --rc genhtml_branch_coverage=1 00:04:22.052 --rc genhtml_function_coverage=1 00:04:22.052 --rc genhtml_legend=1 00:04:22.052 --rc geninfo_all_blocks=1 00:04:22.052 --rc geninfo_unexecuted_blocks=1 00:04:22.052 00:04:22.052 ' 00:04:22.052 21:11:39 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:22.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.052 --rc genhtml_branch_coverage=1 00:04:22.052 --rc genhtml_function_coverage=1 00:04:22.052 --rc genhtml_legend=1 00:04:22.052 --rc geninfo_all_blocks=1 00:04:22.052 --rc geninfo_unexecuted_blocks=1 00:04:22.052 00:04:22.052 ' 00:04:22.052 21:11:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:22.052 21:11:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:22.052 21:11:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:22.052 21:11:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.052 21:11:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.052 21:11:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.052 ************************************ 00:04:22.052 START TEST skip_rpc 00:04:22.052 ************************************ 00:04:22.052 21:11:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:22.052 21:11:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57088 00:04:22.052 21:11:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:22.052 21:11:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.052 21:11:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:22.052 [2024-11-26 21:11:40.069622] Starting SPDK v25.01-pre 
git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:04:22.052 [2024-11-26 21:11:40.069851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57088 ] 00:04:22.310 [2024-11-26 21:11:40.242695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.310 [2024-11-26 21:11:40.360979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57088 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57088 ']' 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57088 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.583 21:11:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57088 00:04:27.583 21:11:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.583 21:11:45 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.583 21:11:45 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57088' 00:04:27.583 killing process with pid 57088 00:04:27.583 21:11:45 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57088 00:04:27.583 21:11:45 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57088 00:04:29.492 00:04:29.492 real 0m7.390s 00:04:29.492 user 0m6.908s 00:04:29.492 sys 0m0.379s 00:04:29.492 ************************************ 00:04:29.492 END TEST skip_rpc 00:04:29.492 ************************************ 00:04:29.492 21:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.492 21:11:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.492 21:11:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:29.492 21:11:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.492 21:11:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.492 21:11:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.492 
************************************ 00:04:29.492 START TEST skip_rpc_with_json 00:04:29.492 ************************************ 00:04:29.492 21:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:29.492 21:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:29.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.492 21:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57192 00:04:29.492 21:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.492 21:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57192 00:04:29.492 21:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57192 ']' 00:04:29.492 21:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.492 21:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.492 21:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.493 21:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.493 21:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.493 21:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.493 [2024-11-26 21:11:47.516112] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:04:29.493 [2024-11-26 21:11:47.516243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57192 ] 00:04:29.752 [2024-11-26 21:11:47.689998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.752 [2024-11-26 21:11:47.812038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.691 [2024-11-26 21:11:48.678978] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:30.691 request: 00:04:30.691 { 00:04:30.691 "trtype": "tcp", 00:04:30.691 "method": "nvmf_get_transports", 00:04:30.691 "req_id": 1 00:04:30.691 } 00:04:30.691 Got JSON-RPC error response 00:04:30.691 response: 00:04:30.691 { 00:04:30.691 "code": -19, 00:04:30.691 "message": "No such device" 00:04:30.691 } 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.691 [2024-11-26 21:11:48.691120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.691 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.950 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.950 21:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.950 { 00:04:30.950 "subsystems": [ 00:04:30.950 { 00:04:30.950 "subsystem": "fsdev", 00:04:30.950 "config": [ 00:04:30.950 { 00:04:30.950 "method": "fsdev_set_opts", 00:04:30.950 "params": { 00:04:30.950 "fsdev_io_pool_size": 65535, 00:04:30.950 "fsdev_io_cache_size": 256 00:04:30.950 } 00:04:30.950 } 00:04:30.950 ] 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "subsystem": "keyring", 00:04:30.950 "config": [] 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "subsystem": "iobuf", 00:04:30.950 "config": [ 00:04:30.950 { 00:04:30.950 "method": "iobuf_set_options", 00:04:30.950 "params": { 00:04:30.950 "small_pool_count": 8192, 00:04:30.950 "large_pool_count": 1024, 00:04:30.950 "small_bufsize": 8192, 00:04:30.950 "large_bufsize": 135168, 00:04:30.950 "enable_numa": false 00:04:30.950 } 00:04:30.950 } 00:04:30.950 ] 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "subsystem": "sock", 00:04:30.950 "config": [ 00:04:30.950 { 00:04:30.950 "method": "sock_set_default_impl", 00:04:30.950 "params": { 00:04:30.950 "impl_name": "posix" 00:04:30.950 } 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "method": "sock_impl_set_options", 00:04:30.950 "params": { 00:04:30.950 "impl_name": "ssl", 00:04:30.950 "recv_buf_size": 4096, 00:04:30.950 "send_buf_size": 4096, 00:04:30.950 "enable_recv_pipe": true, 00:04:30.950 "enable_quickack": false, 00:04:30.950 
"enable_placement_id": 0, 00:04:30.950 "enable_zerocopy_send_server": true, 00:04:30.950 "enable_zerocopy_send_client": false, 00:04:30.950 "zerocopy_threshold": 0, 00:04:30.950 "tls_version": 0, 00:04:30.950 "enable_ktls": false 00:04:30.950 } 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "method": "sock_impl_set_options", 00:04:30.950 "params": { 00:04:30.950 "impl_name": "posix", 00:04:30.950 "recv_buf_size": 2097152, 00:04:30.950 "send_buf_size": 2097152, 00:04:30.950 "enable_recv_pipe": true, 00:04:30.950 "enable_quickack": false, 00:04:30.950 "enable_placement_id": 0, 00:04:30.950 "enable_zerocopy_send_server": true, 00:04:30.950 "enable_zerocopy_send_client": false, 00:04:30.950 "zerocopy_threshold": 0, 00:04:30.950 "tls_version": 0, 00:04:30.950 "enable_ktls": false 00:04:30.950 } 00:04:30.950 } 00:04:30.950 ] 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "subsystem": "vmd", 00:04:30.950 "config": [] 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "subsystem": "accel", 00:04:30.950 "config": [ 00:04:30.950 { 00:04:30.950 "method": "accel_set_options", 00:04:30.950 "params": { 00:04:30.950 "small_cache_size": 128, 00:04:30.950 "large_cache_size": 16, 00:04:30.950 "task_count": 2048, 00:04:30.950 "sequence_count": 2048, 00:04:30.950 "buf_count": 2048 00:04:30.950 } 00:04:30.950 } 00:04:30.950 ] 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "subsystem": "bdev", 00:04:30.950 "config": [ 00:04:30.950 { 00:04:30.950 "method": "bdev_set_options", 00:04:30.950 "params": { 00:04:30.950 "bdev_io_pool_size": 65535, 00:04:30.950 "bdev_io_cache_size": 256, 00:04:30.950 "bdev_auto_examine": true, 00:04:30.950 "iobuf_small_cache_size": 128, 00:04:30.950 "iobuf_large_cache_size": 16 00:04:30.950 } 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "method": "bdev_raid_set_options", 00:04:30.950 "params": { 00:04:30.950 "process_window_size_kb": 1024, 00:04:30.950 "process_max_bandwidth_mb_sec": 0 00:04:30.950 } 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "method": "bdev_iscsi_set_options", 
00:04:30.950 "params": { 00:04:30.950 "timeout_sec": 30 00:04:30.950 } 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "method": "bdev_nvme_set_options", 00:04:30.950 "params": { 00:04:30.950 "action_on_timeout": "none", 00:04:30.950 "timeout_us": 0, 00:04:30.950 "timeout_admin_us": 0, 00:04:30.950 "keep_alive_timeout_ms": 10000, 00:04:30.950 "arbitration_burst": 0, 00:04:30.950 "low_priority_weight": 0, 00:04:30.950 "medium_priority_weight": 0, 00:04:30.950 "high_priority_weight": 0, 00:04:30.950 "nvme_adminq_poll_period_us": 10000, 00:04:30.950 "nvme_ioq_poll_period_us": 0, 00:04:30.950 "io_queue_requests": 0, 00:04:30.950 "delay_cmd_submit": true, 00:04:30.950 "transport_retry_count": 4, 00:04:30.950 "bdev_retry_count": 3, 00:04:30.950 "transport_ack_timeout": 0, 00:04:30.950 "ctrlr_loss_timeout_sec": 0, 00:04:30.950 "reconnect_delay_sec": 0, 00:04:30.950 "fast_io_fail_timeout_sec": 0, 00:04:30.950 "disable_auto_failback": false, 00:04:30.950 "generate_uuids": false, 00:04:30.950 "transport_tos": 0, 00:04:30.950 "nvme_error_stat": false, 00:04:30.950 "rdma_srq_size": 0, 00:04:30.950 "io_path_stat": false, 00:04:30.950 "allow_accel_sequence": false, 00:04:30.950 "rdma_max_cq_size": 0, 00:04:30.950 "rdma_cm_event_timeout_ms": 0, 00:04:30.950 "dhchap_digests": [ 00:04:30.950 "sha256", 00:04:30.950 "sha384", 00:04:30.950 "sha512" 00:04:30.950 ], 00:04:30.950 "dhchap_dhgroups": [ 00:04:30.950 "null", 00:04:30.950 "ffdhe2048", 00:04:30.950 "ffdhe3072", 00:04:30.950 "ffdhe4096", 00:04:30.950 "ffdhe6144", 00:04:30.950 "ffdhe8192" 00:04:30.950 ] 00:04:30.950 } 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "method": "bdev_nvme_set_hotplug", 00:04:30.950 "params": { 00:04:30.950 "period_us": 100000, 00:04:30.950 "enable": false 00:04:30.950 } 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "method": "bdev_wait_for_examine" 00:04:30.950 } 00:04:30.950 ] 00:04:30.950 }, 00:04:30.950 { 00:04:30.950 "subsystem": "scsi", 00:04:30.950 "config": null 00:04:30.950 }, 00:04:30.950 { 
00:04:30.950 "subsystem": "scheduler", 00:04:30.950 "config": [ 00:04:30.950 { 00:04:30.950 "method": "framework_set_scheduler", 00:04:30.950 "params": { 00:04:30.950 "name": "static" 00:04:30.951 } 00:04:30.951 } 00:04:30.951 ] 00:04:30.951 }, 00:04:30.951 { 00:04:30.951 "subsystem": "vhost_scsi", 00:04:30.951 "config": [] 00:04:30.951 }, 00:04:30.951 { 00:04:30.951 "subsystem": "vhost_blk", 00:04:30.951 "config": [] 00:04:30.951 }, 00:04:30.951 { 00:04:30.951 "subsystem": "ublk", 00:04:30.951 "config": [] 00:04:30.951 }, 00:04:30.951 { 00:04:30.951 "subsystem": "nbd", 00:04:30.951 "config": [] 00:04:30.951 }, 00:04:30.951 { 00:04:30.951 "subsystem": "nvmf", 00:04:30.951 "config": [ 00:04:30.951 { 00:04:30.951 "method": "nvmf_set_config", 00:04:30.951 "params": { 00:04:30.951 "discovery_filter": "match_any", 00:04:30.951 "admin_cmd_passthru": { 00:04:30.951 "identify_ctrlr": false 00:04:30.951 }, 00:04:30.951 "dhchap_digests": [ 00:04:30.951 "sha256", 00:04:30.951 "sha384", 00:04:30.951 "sha512" 00:04:30.951 ], 00:04:30.951 "dhchap_dhgroups": [ 00:04:30.951 "null", 00:04:30.951 "ffdhe2048", 00:04:30.951 "ffdhe3072", 00:04:30.951 "ffdhe4096", 00:04:30.951 "ffdhe6144", 00:04:30.951 "ffdhe8192" 00:04:30.951 ] 00:04:30.951 } 00:04:30.951 }, 00:04:30.951 { 00:04:30.951 "method": "nvmf_set_max_subsystems", 00:04:30.951 "params": { 00:04:30.951 "max_subsystems": 1024 00:04:30.951 } 00:04:30.951 }, 00:04:30.951 { 00:04:30.951 "method": "nvmf_set_crdt", 00:04:30.951 "params": { 00:04:30.951 "crdt1": 0, 00:04:30.951 "crdt2": 0, 00:04:30.951 "crdt3": 0 00:04:30.951 } 00:04:30.951 }, 00:04:30.951 { 00:04:30.951 "method": "nvmf_create_transport", 00:04:30.951 "params": { 00:04:30.951 "trtype": "TCP", 00:04:30.951 "max_queue_depth": 128, 00:04:30.951 "max_io_qpairs_per_ctrlr": 127, 00:04:30.951 "in_capsule_data_size": 4096, 00:04:30.951 "max_io_size": 131072, 00:04:30.951 "io_unit_size": 131072, 00:04:30.951 "max_aq_depth": 128, 00:04:30.951 "num_shared_buffers": 511, 
00:04:30.951 "buf_cache_size": 4294967295, 00:04:30.951 "dif_insert_or_strip": false, 00:04:30.951 "zcopy": false, 00:04:30.951 "c2h_success": true, 00:04:30.951 "sock_priority": 0, 00:04:30.951 "abort_timeout_sec": 1, 00:04:30.951 "ack_timeout": 0, 00:04:30.951 "data_wr_pool_size": 0 00:04:30.951 } 00:04:30.951 } 00:04:30.951 ] 00:04:30.951 }, 00:04:30.951 { 00:04:30.951 "subsystem": "iscsi", 00:04:30.951 "config": [ 00:04:30.951 { 00:04:30.951 "method": "iscsi_set_options", 00:04:30.951 "params": { 00:04:30.951 "node_base": "iqn.2016-06.io.spdk", 00:04:30.951 "max_sessions": 128, 00:04:30.951 "max_connections_per_session": 2, 00:04:30.951 "max_queue_depth": 64, 00:04:30.951 "default_time2wait": 2, 00:04:30.951 "default_time2retain": 20, 00:04:30.951 "first_burst_length": 8192, 00:04:30.951 "immediate_data": true, 00:04:30.951 "allow_duplicated_isid": false, 00:04:30.951 "error_recovery_level": 0, 00:04:30.951 "nop_timeout": 60, 00:04:30.951 "nop_in_interval": 30, 00:04:30.951 "disable_chap": false, 00:04:30.951 "require_chap": false, 00:04:30.951 "mutual_chap": false, 00:04:30.951 "chap_group": 0, 00:04:30.951 "max_large_datain_per_connection": 64, 00:04:30.951 "max_r2t_per_connection": 4, 00:04:30.951 "pdu_pool_size": 36864, 00:04:30.951 "immediate_data_pool_size": 16384, 00:04:30.951 "data_out_pool_size": 2048 00:04:30.951 } 00:04:30.951 } 00:04:30.951 ] 00:04:30.951 } 00:04:30.951 ] 00:04:30.951 } 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57192 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57192 ']' 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57192 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57192 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57192' 00:04:30.951 killing process with pid 57192 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57192 00:04:30.951 21:11:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57192 00:04:33.487 21:11:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:33.487 21:11:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57248 00:04:33.487 21:11:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:38.774 21:11:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57248 00:04:38.774 21:11:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57248 ']' 00:04:38.774 21:11:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57248 00:04:38.774 21:11:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:38.774 21:11:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.774 21:11:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57248 00:04:38.774 21:11:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.774 21:11:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:38.774 21:11:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57248' 00:04:38.774 killing process with pid 57248 00:04:38.774 21:11:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57248 00:04:38.774 21:11:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57248 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.683 00:04:40.683 real 0m11.295s 00:04:40.683 user 0m10.755s 00:04:40.683 sys 0m0.851s 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.683 ************************************ 00:04:40.683 END TEST skip_rpc_with_json 00:04:40.683 ************************************ 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.683 21:11:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:40.683 21:11:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.683 21:11:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.683 21:11:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.683 ************************************ 00:04:40.683 START TEST skip_rpc_with_delay 00:04:40.683 ************************************ 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:40.683 21:11:58 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:40.683 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.943 [2024-11-26 21:11:58.878844] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:40.943 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:40.943 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.943 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.943 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.943 00:04:40.943 real 0m0.170s 00:04:40.943 user 0m0.092s 00:04:40.943 sys 0m0.076s 00:04:40.943 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.943 ************************************ 00:04:40.943 END TEST skip_rpc_with_delay 00:04:40.943 ************************************ 00:04:40.943 21:11:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.943 21:11:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.943 21:11:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.943 21:11:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.943 21:11:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.943 21:11:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.943 21:11:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.943 ************************************ 00:04:40.943 START TEST exit_on_failed_rpc_init 00:04:40.943 ************************************ 00:04:40.943 21:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:40.943 21:11:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57376 00:04:40.943 21:11:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.943 21:11:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57376 00:04:40.943 21:11:59 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57376 ']' 00:04:40.943 21:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.943 21:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.943 21:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.943 21:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.943 21:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.205 [2024-11-26 21:11:59.115915] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:04:41.205 [2024-11-26 21:11:59.116075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57376 ] 00:04:41.205 [2024-11-26 21:11:59.290882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.469 [2024-11-26 21:11:59.409057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.106 21:12:00 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:42.106 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.375 [2024-11-26 21:12:00.341844] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:04:42.375 [2024-11-26 21:12:00.341979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57394 ] 00:04:42.375 [2024-11-26 21:12:00.514923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.634 [2024-11-26 21:12:00.633521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.634 [2024-11-26 21:12:00.633607] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:42.634 [2024-11-26 21:12:00.633621] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:42.634 [2024-11-26 21:12:00.633632] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57376 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57376 ']' 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57376 00:04:42.894 21:12:00 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57376 00:04:42.894 killing process with pid 57376 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57376' 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57376 00:04:42.894 21:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57376 00:04:45.432 ************************************ 00:04:45.432 END TEST exit_on_failed_rpc_init 00:04:45.432 ************************************ 00:04:45.432 00:04:45.432 real 0m4.301s 00:04:45.432 user 0m4.647s 00:04:45.432 sys 0m0.552s 00:04:45.432 21:12:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.432 21:12:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.432 21:12:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.432 00:04:45.432 real 0m23.644s 00:04:45.432 user 0m22.615s 00:04:45.432 sys 0m2.151s 00:04:45.432 ************************************ 00:04:45.432 END TEST skip_rpc 00:04:45.432 ************************************ 00:04:45.432 21:12:03 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.432 21:12:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.432 21:12:03 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:45.432 21:12:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.432 21:12:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.432 21:12:03 -- common/autotest_common.sh@10 -- # set +x 00:04:45.432 ************************************ 00:04:45.432 START TEST rpc_client 00:04:45.432 ************************************ 00:04:45.432 21:12:03 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:45.432 * Looking for test storage... 00:04:45.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:45.432 21:12:03 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.432 21:12:03 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.432 21:12:03 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.692 21:12:03 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.692 21:12:03 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.692 21:12:03 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.692 21:12:03 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.692 21:12:03 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.692 21:12:03 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.692 21:12:03 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.692 21:12:03 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.692 21:12:03 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.692 21:12:03 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.692 21:12:03 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.693 21:12:03 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:45.693 21:12:03 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.693 21:12:03 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.693 --rc genhtml_branch_coverage=1 00:04:45.693 --rc genhtml_function_coverage=1 00:04:45.693 --rc genhtml_legend=1 00:04:45.693 --rc geninfo_all_blocks=1 00:04:45.693 --rc geninfo_unexecuted_blocks=1 00:04:45.693 00:04:45.693 ' 00:04:45.693 21:12:03 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.693 --rc genhtml_branch_coverage=1 00:04:45.693 --rc genhtml_function_coverage=1 00:04:45.693 --rc 
genhtml_legend=1 00:04:45.693 --rc geninfo_all_blocks=1 00:04:45.693 --rc geninfo_unexecuted_blocks=1 00:04:45.693 00:04:45.693 ' 00:04:45.693 21:12:03 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.693 --rc genhtml_branch_coverage=1 00:04:45.693 --rc genhtml_function_coverage=1 00:04:45.693 --rc genhtml_legend=1 00:04:45.693 --rc geninfo_all_blocks=1 00:04:45.693 --rc geninfo_unexecuted_blocks=1 00:04:45.693 00:04:45.693 ' 00:04:45.693 21:12:03 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.693 --rc genhtml_branch_coverage=1 00:04:45.693 --rc genhtml_function_coverage=1 00:04:45.693 --rc genhtml_legend=1 00:04:45.693 --rc geninfo_all_blocks=1 00:04:45.693 --rc geninfo_unexecuted_blocks=1 00:04:45.693 00:04:45.693 ' 00:04:45.693 21:12:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:45.693 OK 00:04:45.693 21:12:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:45.693 00:04:45.693 real 0m0.309s 00:04:45.693 user 0m0.172s 00:04:45.693 sys 0m0.153s 00:04:45.693 21:12:03 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.693 21:12:03 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:45.693 ************************************ 00:04:45.693 END TEST rpc_client 00:04:45.693 ************************************ 00:04:45.693 21:12:03 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:45.693 21:12:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.693 21:12:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.693 21:12:03 -- common/autotest_common.sh@10 -- # set +x 00:04:45.693 ************************************ 00:04:45.693 START TEST json_config 
00:04:45.693 ************************************ 00:04:45.693 21:12:03 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:45.954 21:12:03 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.954 21:12:03 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.954 21:12:03 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.954 21:12:03 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.954 21:12:03 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.954 21:12:03 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.954 21:12:03 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.954 21:12:03 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.954 21:12:03 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.954 21:12:03 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.954 21:12:03 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.954 21:12:03 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.954 21:12:03 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.954 21:12:03 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.954 21:12:03 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.954 21:12:03 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:45.954 21:12:03 json_config -- scripts/common.sh@345 -- # : 1 00:04:45.954 21:12:03 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.954 21:12:03 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.954 21:12:03 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:45.954 21:12:03 json_config -- scripts/common.sh@353 -- # local d=1 00:04:45.954 21:12:03 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.954 21:12:03 json_config -- scripts/common.sh@355 -- # echo 1 00:04:45.954 21:12:03 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.954 21:12:03 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:45.954 21:12:03 json_config -- scripts/common.sh@353 -- # local d=2 00:04:45.954 21:12:04 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.954 21:12:04 json_config -- scripts/common.sh@355 -- # echo 2 00:04:45.954 21:12:04 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.954 21:12:04 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.954 21:12:04 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.954 21:12:04 json_config -- scripts/common.sh@368 -- # return 0 00:04:45.954 21:12:04 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.954 21:12:04 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.954 --rc genhtml_branch_coverage=1 00:04:45.954 --rc genhtml_function_coverage=1 00:04:45.954 --rc genhtml_legend=1 00:04:45.954 --rc geninfo_all_blocks=1 00:04:45.954 --rc geninfo_unexecuted_blocks=1 00:04:45.954 00:04:45.954 ' 00:04:45.954 21:12:04 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.954 --rc genhtml_branch_coverage=1 00:04:45.954 --rc genhtml_function_coverage=1 00:04:45.954 --rc genhtml_legend=1 00:04:45.954 --rc geninfo_all_blocks=1 00:04:45.954 --rc geninfo_unexecuted_blocks=1 00:04:45.954 00:04:45.954 ' 00:04:45.954 21:12:04 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.954 --rc genhtml_branch_coverage=1 00:04:45.954 --rc genhtml_function_coverage=1 00:04:45.954 --rc genhtml_legend=1 00:04:45.954 --rc geninfo_all_blocks=1 00:04:45.954 --rc geninfo_unexecuted_blocks=1 00:04:45.954 00:04:45.954 ' 00:04:45.954 21:12:04 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.954 --rc genhtml_branch_coverage=1 00:04:45.954 --rc genhtml_function_coverage=1 00:04:45.954 --rc genhtml_legend=1 00:04:45.954 --rc geninfo_all_blocks=1 00:04:45.954 --rc geninfo_unexecuted_blocks=1 00:04:45.954 00:04:45.954 ' 00:04:45.954 21:12:04 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eab190e8-05a3-4c07-ae74-fd1981a29539 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=eab190e8-05a3-4c07-ae74-fd1981a29539 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:45.954 21:12:04 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:45.954 21:12:04 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.954 21:12:04 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.954 21:12:04 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.954 21:12:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.954 21:12:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.954 21:12:04 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.954 21:12:04 json_config -- paths/export.sh@5 -- # export PATH 00:04:45.954 21:12:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@51 -- # : 0 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.954 21:12:04 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:45.955 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:45.955 21:12:04 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:45.955 21:12:04 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:45.955 21:12:04 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:45.955 21:12:04 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:45.955 21:12:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:45.955 21:12:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:45.955 21:12:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:45.955 21:12:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:45.955 21:12:04 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:45.955 WARNING: No tests are enabled so not running JSON configuration tests 00:04:45.955 21:12:04 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:45.955 00:04:45.955 real 0m0.242s 00:04:45.955 user 0m0.130s 00:04:45.955 sys 0m0.115s 00:04:45.955 21:12:04 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.955 21:12:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.955 ************************************ 00:04:45.955 END TEST json_config 00:04:45.955 ************************************ 00:04:46.216 21:12:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:46.216 21:12:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.216 21:12:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.216 21:12:04 -- common/autotest_common.sh@10 -- # set +x 00:04:46.216 ************************************ 00:04:46.216 START TEST json_config_extra_key 00:04:46.216 ************************************ 00:04:46.216 21:12:04 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:46.216 21:12:04 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.216 21:12:04 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:46.216 21:12:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.216 21:12:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:46.216 21:12:04 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.216 21:12:04 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.216 --rc genhtml_branch_coverage=1 00:04:46.216 --rc genhtml_function_coverage=1 00:04:46.216 --rc genhtml_legend=1 00:04:46.216 --rc geninfo_all_blocks=1 00:04:46.216 --rc geninfo_unexecuted_blocks=1 00:04:46.216 00:04:46.216 ' 00:04:46.216 21:12:04 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.216 --rc genhtml_branch_coverage=1 00:04:46.216 --rc genhtml_function_coverage=1 00:04:46.216 --rc 
genhtml_legend=1 00:04:46.216 --rc geninfo_all_blocks=1 00:04:46.216 --rc geninfo_unexecuted_blocks=1 00:04:46.216 00:04:46.216 ' 00:04:46.216 21:12:04 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.216 --rc genhtml_branch_coverage=1 00:04:46.216 --rc genhtml_function_coverage=1 00:04:46.216 --rc genhtml_legend=1 00:04:46.216 --rc geninfo_all_blocks=1 00:04:46.216 --rc geninfo_unexecuted_blocks=1 00:04:46.216 00:04:46.216 ' 00:04:46.216 21:12:04 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.216 --rc genhtml_branch_coverage=1 00:04:46.216 --rc genhtml_function_coverage=1 00:04:46.216 --rc genhtml_legend=1 00:04:46.216 --rc geninfo_all_blocks=1 00:04:46.216 --rc geninfo_unexecuted_blocks=1 00:04:46.216 00:04:46.216 ' 00:04:46.216 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eab190e8-05a3-4c07-ae74-fd1981a29539 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=eab190e8-05a3-4c07-ae74-fd1981a29539 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.216 21:12:04 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.216 21:12:04 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.216 21:12:04 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.216 21:12:04 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.216 21:12:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:46.216 21:12:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
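The `scripts/common.sh` trace earlier in this test's setup walks `lt 1.15 2` through `cmp_versions`: both versions are split on `IFS=.-:` into arrays and compared component by component. A hypothetical standalone re-implementation of that idea (numeric components only; this is a sketch, not SPDK's actual `cmp_versions`):

```shell
# cmp_lt VER1 VER2 — succeed (exit 0) iff VER1 < VER2, comparing
# dot/dash/colon-separated numeric components left to right.
cmp_lt() {
    local IFS=.-:                 # split on the same separators the trace uses
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < len; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0       # first differing component decides
        (( a > b )) && return 1
    done
    return 1                          # equal => not less-than
}

cmp_lt 1.15 2 && echo "1.15 < 2"
```

This matches what the trace shows for `lt 1.15 2`: `ver1=(1 15)`, `ver2=(2)`, and the loop returns at the first component since 1 < 2.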
00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:46.216 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:46.216 21:12:04 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:46.217 21:12:04 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:46.217 INFO: launching applications... 
00:04:46.217 21:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:46.217 21:12:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:46.217 21:12:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:46.217 21:12:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.217 21:12:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.217 21:12:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.217 21:12:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.217 21:12:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.217 21:12:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57604 00:04:46.217 21:12:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.217 Waiting for target to run... 00:04:46.217 21:12:04 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:46.217 21:12:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57604 /var/tmp/spdk_tgt.sock 00:04:46.217 21:12:04 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57604 ']' 00:04:46.217 21:12:04 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.217 21:12:04 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.217 21:12:04 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:46.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.217 21:12:04 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.217 21:12:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.477 [2024-11-26 21:12:04.446646] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:04:46.477 [2024-11-26 21:12:04.446864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57604 ] 00:04:46.736 [2024-11-26 21:12:04.820502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.997 [2024-11-26 21:12:04.929965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.567 21:12:05 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.567 21:12:05 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:47.567 21:12:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:47.567 00:04:47.567 21:12:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:47.567 INFO: shutting down applications... 
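The "shutting down applications" step above kicks off the harness's polling teardown, visible in the trace that follows: signal the target, then probe it with `kill -0` every half second for up to 30 attempts. A minimal, hypothetical sketch of that pattern (`wait_for_exit` is not the harness's real function name; note that the harness sends SIGINT, while the demo uses SIGTERM because non-interactive shells start background jobs with SIGINT ignored):

```shell
# wait_for_exit PID [SIGNAL] — signal the process, then poll with `kill -0`
# (which checks existence without delivering a signal) until it exits,
# giving up after 30 half-second intervals, mirroring the loop in the log.
wait_for_exit() {
    local app_pid=$1 sig=${2:-SIGINT}
    kill -s "$sig" "$app_pid" 2>/dev/null
    local i
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || return 0   # gone: shutdown done
        sleep 0.5
    done
    return 1   # still alive after ~15s
}

sleep 60 &                              # stand-in for the target process
wait_for_exit $! SIGTERM && echo "target shutdown done"
```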
00:04:47.567 21:12:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:47.567 21:12:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:47.567 21:12:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.567 21:12:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57604 ]] 00:04:47.567 21:12:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57604 00:04:47.567 21:12:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.567 21:12:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.567 21:12:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57604 00:04:47.567 21:12:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.137 21:12:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.137 21:12:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.137 21:12:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57604 00:04:48.137 21:12:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.706 21:12:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.706 21:12:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.706 21:12:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57604 00:04:48.706 21:12:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.276 21:12:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.276 21:12:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.276 21:12:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57604 00:04:49.276 21:12:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.845 21:12:07 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:49.845 21:12:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.845 21:12:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57604 00:04:49.845 21:12:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.104 21:12:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.104 21:12:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.104 21:12:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57604 00:04:50.104 21:12:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.672 21:12:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.672 21:12:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.672 21:12:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57604 00:04:50.672 21:12:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:50.672 SPDK target shutdown done 00:04:50.672 Success 00:04:50.672 21:12:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:50.672 21:12:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:50.672 21:12:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:50.672 21:12:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:50.672 00:04:50.672 real 0m4.588s 00:04:50.672 user 0m3.989s 00:04:50.672 sys 0m0.562s 00:04:50.672 21:12:08 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.672 21:12:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.672 ************************************ 00:04:50.672 END TEST json_config_extra_key 00:04:50.672 ************************************ 00:04:50.672 21:12:08 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.673 21:12:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.673 21:12:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.673 21:12:08 -- common/autotest_common.sh@10 -- # set +x 00:04:50.673 ************************************ 00:04:50.673 START TEST alias_rpc 00:04:50.673 ************************************ 00:04:50.673 21:12:08 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.940 * Looking for test storage... 00:04:50.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:50.940 21:12:08 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.940 21:12:08 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.940 21:12:08 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.940 21:12:08 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:50.940 21:12:08 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.940 21:12:08 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:50.940 21:12:08 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.940 21:12:08 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.940 --rc genhtml_branch_coverage=1 00:04:50.940 --rc genhtml_function_coverage=1 00:04:50.940 --rc genhtml_legend=1 00:04:50.940 --rc geninfo_all_blocks=1 00:04:50.940 --rc geninfo_unexecuted_blocks=1 00:04:50.940 00:04:50.940 ' 00:04:50.940 21:12:08 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.940 --rc genhtml_branch_coverage=1 00:04:50.940 --rc genhtml_function_coverage=1 00:04:50.940 --rc 
genhtml_legend=1 00:04:50.940 --rc geninfo_all_blocks=1 00:04:50.940 --rc geninfo_unexecuted_blocks=1 00:04:50.940 00:04:50.940 ' 00:04:50.940 21:12:09 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.940 --rc genhtml_branch_coverage=1 00:04:50.940 --rc genhtml_function_coverage=1 00:04:50.940 --rc genhtml_legend=1 00:04:50.940 --rc geninfo_all_blocks=1 00:04:50.940 --rc geninfo_unexecuted_blocks=1 00:04:50.940 00:04:50.940 ' 00:04:50.940 21:12:09 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.940 --rc genhtml_branch_coverage=1 00:04:50.940 --rc genhtml_function_coverage=1 00:04:50.940 --rc genhtml_legend=1 00:04:50.940 --rc geninfo_all_blocks=1 00:04:50.940 --rc geninfo_unexecuted_blocks=1 00:04:50.940 00:04:50.940 ' 00:04:50.940 21:12:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:50.940 21:12:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57716 00:04:50.940 21:12:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.940 21:12:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57716 00:04:50.940 21:12:09 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57716 ']' 00:04:50.940 21:12:09 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.940 21:12:09 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.940 21:12:09 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
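The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above corresponds to a wait-for-listen loop in the harness. SPDK's real `waitforlisten` also verifies the target responds over the socket; the following is only a hypothetical file-existence sketch of the polling shape:

```shell
# waitforsocket PATH [RETRIES] — poll until a UNIX domain socket file
# appears at PATH, sleeping 0.1s between attempts (default 100 tries).
# Checking only `-S` is a simplification: the real helper issues an RPC.
waitforsocket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```

Usage would mirror the log: start `spdk_tgt -r /var/tmp/spdk.sock …` in the background, then `waitforsocket /var/tmp/spdk.sock || exit 1` before issuing RPCs.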
00:04:50.940 21:12:09 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.940 21:12:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.243 [2024-11-26 21:12:09.103025] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:04:51.243 [2024-11-26 21:12:09.103161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57716 ] 00:04:51.243 [2024-11-26 21:12:09.262298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.243 [2024-11-26 21:12:09.387604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.179 21:12:10 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.179 21:12:10 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:52.179 21:12:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:52.438 21:12:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57716 00:04:52.439 21:12:10 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57716 ']' 00:04:52.439 21:12:10 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57716 00:04:52.439 21:12:10 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.439 21:12:10 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.439 21:12:10 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57716 00:04:52.439 killing process with pid 57716 00:04:52.439 21:12:10 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.439 21:12:10 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.439 21:12:10 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57716' 00:04:52.439 21:12:10 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57716 00:04:52.439 21:12:10 alias_rpc -- common/autotest_common.sh@978 -- # wait 57716 00:04:54.977 ************************************ 00:04:54.977 END TEST alias_rpc 00:04:54.977 ************************************ 00:04:54.977 00:04:54.977 real 0m4.066s 00:04:54.977 user 0m4.035s 00:04:54.977 sys 0m0.581s 00:04:54.977 21:12:12 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.977 21:12:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.977 21:12:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:54.977 21:12:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.977 21:12:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.977 21:12:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.977 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:04:54.977 ************************************ 00:04:54.977 START TEST spdkcli_tcp 00:04:54.977 ************************************ 00:04:54.977 21:12:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.977 * Looking for test storage... 
00:04:54.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:54.977 21:12:13 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.977 21:12:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.977 21:12:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.977 21:12:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.977 21:12:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:54.977 21:12:13 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.977 21:12:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.977 --rc genhtml_branch_coverage=1 00:04:54.977 --rc genhtml_function_coverage=1 00:04:54.977 --rc genhtml_legend=1 00:04:54.977 --rc geninfo_all_blocks=1 00:04:54.977 --rc geninfo_unexecuted_blocks=1 00:04:54.977 00:04:54.977 ' 00:04:54.977 21:12:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.977 --rc genhtml_branch_coverage=1 00:04:54.977 --rc genhtml_function_coverage=1 00:04:54.977 --rc genhtml_legend=1 00:04:54.977 --rc geninfo_all_blocks=1 00:04:54.977 --rc geninfo_unexecuted_blocks=1 00:04:54.977 00:04:54.977 ' 00:04:54.977 21:12:13 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.977 --rc genhtml_branch_coverage=1 00:04:54.977 --rc genhtml_function_coverage=1 00:04:54.977 --rc genhtml_legend=1 00:04:54.977 --rc geninfo_all_blocks=1 00:04:54.977 --rc geninfo_unexecuted_blocks=1 00:04:54.977 00:04:54.977 ' 00:04:54.977 21:12:13 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.977 --rc genhtml_branch_coverage=1 00:04:54.977 --rc genhtml_function_coverage=1 00:04:54.977 --rc genhtml_legend=1 00:04:54.977 --rc geninfo_all_blocks=1 00:04:54.977 --rc geninfo_unexecuted_blocks=1 00:04:54.977 00:04:54.977 ' 00:04:54.977 21:12:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:54.977 21:12:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:54.977 21:12:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:54.977 21:12:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.977 21:12:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.977 21:12:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.977 21:12:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.977 21:12:13 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.977 21:12:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.236 21:12:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57823 00:04:55.236 21:12:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:55.236 21:12:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57823 00:04:55.236 21:12:13 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57823 ']' 00:04:55.236 21:12:13 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.236 21:12:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.236 21:12:13 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.236 21:12:13 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.236 21:12:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.236 [2024-11-26 21:12:13.231747] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:04:55.236 [2024-11-26 21:12:13.231953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57823 ] 00:04:55.495 [2024-11-26 21:12:13.406214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.495 [2024-11-26 21:12:13.518260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.495 [2024-11-26 21:12:13.518302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.434 21:12:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.434 21:12:14 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:56.434 21:12:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:56.434 21:12:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57845 00:04:56.434 21:12:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:56.434 [ 00:04:56.434 "bdev_malloc_delete", 
00:04:56.434 "bdev_malloc_create", 00:04:56.434 "bdev_null_resize", 00:04:56.434 "bdev_null_delete", 00:04:56.434 "bdev_null_create", 00:04:56.434 "bdev_nvme_cuse_unregister", 00:04:56.434 "bdev_nvme_cuse_register", 00:04:56.434 "bdev_opal_new_user", 00:04:56.434 "bdev_opal_set_lock_state", 00:04:56.434 "bdev_opal_delete", 00:04:56.434 "bdev_opal_get_info", 00:04:56.434 "bdev_opal_create", 00:04:56.434 "bdev_nvme_opal_revert", 00:04:56.434 "bdev_nvme_opal_init", 00:04:56.434 "bdev_nvme_send_cmd", 00:04:56.434 "bdev_nvme_set_keys", 00:04:56.434 "bdev_nvme_get_path_iostat", 00:04:56.434 "bdev_nvme_get_mdns_discovery_info", 00:04:56.434 "bdev_nvme_stop_mdns_discovery", 00:04:56.434 "bdev_nvme_start_mdns_discovery", 00:04:56.434 "bdev_nvme_set_multipath_policy", 00:04:56.434 "bdev_nvme_set_preferred_path", 00:04:56.434 "bdev_nvme_get_io_paths", 00:04:56.434 "bdev_nvme_remove_error_injection", 00:04:56.434 "bdev_nvme_add_error_injection", 00:04:56.434 "bdev_nvme_get_discovery_info", 00:04:56.434 "bdev_nvme_stop_discovery", 00:04:56.434 "bdev_nvme_start_discovery", 00:04:56.434 "bdev_nvme_get_controller_health_info", 00:04:56.434 "bdev_nvme_disable_controller", 00:04:56.434 "bdev_nvme_enable_controller", 00:04:56.434 "bdev_nvme_reset_controller", 00:04:56.434 "bdev_nvme_get_transport_statistics", 00:04:56.434 "bdev_nvme_apply_firmware", 00:04:56.434 "bdev_nvme_detach_controller", 00:04:56.434 "bdev_nvme_get_controllers", 00:04:56.434 "bdev_nvme_attach_controller", 00:04:56.434 "bdev_nvme_set_hotplug", 00:04:56.434 "bdev_nvme_set_options", 00:04:56.434 "bdev_passthru_delete", 00:04:56.434 "bdev_passthru_create", 00:04:56.434 "bdev_lvol_set_parent_bdev", 00:04:56.434 "bdev_lvol_set_parent", 00:04:56.434 "bdev_lvol_check_shallow_copy", 00:04:56.434 "bdev_lvol_start_shallow_copy", 00:04:56.434 "bdev_lvol_grow_lvstore", 00:04:56.434 "bdev_lvol_get_lvols", 00:04:56.434 "bdev_lvol_get_lvstores", 00:04:56.434 "bdev_lvol_delete", 00:04:56.434 "bdev_lvol_set_read_only", 
00:04:56.434 "bdev_lvol_resize", 00:04:56.434 "bdev_lvol_decouple_parent", 00:04:56.434 "bdev_lvol_inflate", 00:04:56.434 "bdev_lvol_rename", 00:04:56.434 "bdev_lvol_clone_bdev", 00:04:56.434 "bdev_lvol_clone", 00:04:56.434 "bdev_lvol_snapshot", 00:04:56.434 "bdev_lvol_create", 00:04:56.434 "bdev_lvol_delete_lvstore", 00:04:56.434 "bdev_lvol_rename_lvstore", 00:04:56.434 "bdev_lvol_create_lvstore", 00:04:56.434 "bdev_raid_set_options", 00:04:56.434 "bdev_raid_remove_base_bdev", 00:04:56.434 "bdev_raid_add_base_bdev", 00:04:56.434 "bdev_raid_delete", 00:04:56.434 "bdev_raid_create", 00:04:56.434 "bdev_raid_get_bdevs", 00:04:56.434 "bdev_error_inject_error", 00:04:56.434 "bdev_error_delete", 00:04:56.434 "bdev_error_create", 00:04:56.434 "bdev_split_delete", 00:04:56.434 "bdev_split_create", 00:04:56.434 "bdev_delay_delete", 00:04:56.434 "bdev_delay_create", 00:04:56.434 "bdev_delay_update_latency", 00:04:56.434 "bdev_zone_block_delete", 00:04:56.434 "bdev_zone_block_create", 00:04:56.434 "blobfs_create", 00:04:56.434 "blobfs_detect", 00:04:56.434 "blobfs_set_cache_size", 00:04:56.434 "bdev_aio_delete", 00:04:56.434 "bdev_aio_rescan", 00:04:56.434 "bdev_aio_create", 00:04:56.434 "bdev_ftl_set_property", 00:04:56.434 "bdev_ftl_get_properties", 00:04:56.434 "bdev_ftl_get_stats", 00:04:56.434 "bdev_ftl_unmap", 00:04:56.434 "bdev_ftl_unload", 00:04:56.434 "bdev_ftl_delete", 00:04:56.434 "bdev_ftl_load", 00:04:56.434 "bdev_ftl_create", 00:04:56.434 "bdev_virtio_attach_controller", 00:04:56.434 "bdev_virtio_scsi_get_devices", 00:04:56.434 "bdev_virtio_detach_controller", 00:04:56.434 "bdev_virtio_blk_set_hotplug", 00:04:56.434 "bdev_iscsi_delete", 00:04:56.434 "bdev_iscsi_create", 00:04:56.434 "bdev_iscsi_set_options", 00:04:56.434 "accel_error_inject_error", 00:04:56.434 "ioat_scan_accel_module", 00:04:56.434 "dsa_scan_accel_module", 00:04:56.434 "iaa_scan_accel_module", 00:04:56.434 "keyring_file_remove_key", 00:04:56.434 "keyring_file_add_key", 00:04:56.434 
"keyring_linux_set_options", 00:04:56.434 "fsdev_aio_delete", 00:04:56.434 "fsdev_aio_create", 00:04:56.434 "iscsi_get_histogram", 00:04:56.434 "iscsi_enable_histogram", 00:04:56.434 "iscsi_set_options", 00:04:56.434 "iscsi_get_auth_groups", 00:04:56.434 "iscsi_auth_group_remove_secret", 00:04:56.434 "iscsi_auth_group_add_secret", 00:04:56.434 "iscsi_delete_auth_group", 00:04:56.434 "iscsi_create_auth_group", 00:04:56.434 "iscsi_set_discovery_auth", 00:04:56.434 "iscsi_get_options", 00:04:56.434 "iscsi_target_node_request_logout", 00:04:56.434 "iscsi_target_node_set_redirect", 00:04:56.434 "iscsi_target_node_set_auth", 00:04:56.434 "iscsi_target_node_add_lun", 00:04:56.434 "iscsi_get_stats", 00:04:56.434 "iscsi_get_connections", 00:04:56.434 "iscsi_portal_group_set_auth", 00:04:56.434 "iscsi_start_portal_group", 00:04:56.434 "iscsi_delete_portal_group", 00:04:56.434 "iscsi_create_portal_group", 00:04:56.434 "iscsi_get_portal_groups", 00:04:56.434 "iscsi_delete_target_node", 00:04:56.434 "iscsi_target_node_remove_pg_ig_maps", 00:04:56.434 "iscsi_target_node_add_pg_ig_maps", 00:04:56.434 "iscsi_create_target_node", 00:04:56.434 "iscsi_get_target_nodes", 00:04:56.434 "iscsi_delete_initiator_group", 00:04:56.435 "iscsi_initiator_group_remove_initiators", 00:04:56.435 "iscsi_initiator_group_add_initiators", 00:04:56.435 "iscsi_create_initiator_group", 00:04:56.435 "iscsi_get_initiator_groups", 00:04:56.435 "nvmf_set_crdt", 00:04:56.435 "nvmf_set_config", 00:04:56.435 "nvmf_set_max_subsystems", 00:04:56.435 "nvmf_stop_mdns_prr", 00:04:56.435 "nvmf_publish_mdns_prr", 00:04:56.435 "nvmf_subsystem_get_listeners", 00:04:56.435 "nvmf_subsystem_get_qpairs", 00:04:56.435 "nvmf_subsystem_get_controllers", 00:04:56.435 "nvmf_get_stats", 00:04:56.435 "nvmf_get_transports", 00:04:56.435 "nvmf_create_transport", 00:04:56.435 "nvmf_get_targets", 00:04:56.435 "nvmf_delete_target", 00:04:56.435 "nvmf_create_target", 00:04:56.435 "nvmf_subsystem_allow_any_host", 00:04:56.435 
"nvmf_subsystem_set_keys", 00:04:56.435 "nvmf_subsystem_remove_host", 00:04:56.435 "nvmf_subsystem_add_host", 00:04:56.435 "nvmf_ns_remove_host", 00:04:56.435 "nvmf_ns_add_host", 00:04:56.435 "nvmf_subsystem_remove_ns", 00:04:56.435 "nvmf_subsystem_set_ns_ana_group", 00:04:56.435 "nvmf_subsystem_add_ns", 00:04:56.435 "nvmf_subsystem_listener_set_ana_state", 00:04:56.435 "nvmf_discovery_get_referrals", 00:04:56.435 "nvmf_discovery_remove_referral", 00:04:56.435 "nvmf_discovery_add_referral", 00:04:56.435 "nvmf_subsystem_remove_listener", 00:04:56.435 "nvmf_subsystem_add_listener", 00:04:56.435 "nvmf_delete_subsystem", 00:04:56.435 "nvmf_create_subsystem", 00:04:56.435 "nvmf_get_subsystems", 00:04:56.435 "env_dpdk_get_mem_stats", 00:04:56.435 "nbd_get_disks", 00:04:56.435 "nbd_stop_disk", 00:04:56.435 "nbd_start_disk", 00:04:56.435 "ublk_recover_disk", 00:04:56.435 "ublk_get_disks", 00:04:56.435 "ublk_stop_disk", 00:04:56.435 "ublk_start_disk", 00:04:56.435 "ublk_destroy_target", 00:04:56.435 "ublk_create_target", 00:04:56.435 "virtio_blk_create_transport", 00:04:56.435 "virtio_blk_get_transports", 00:04:56.435 "vhost_controller_set_coalescing", 00:04:56.435 "vhost_get_controllers", 00:04:56.435 "vhost_delete_controller", 00:04:56.435 "vhost_create_blk_controller", 00:04:56.435 "vhost_scsi_controller_remove_target", 00:04:56.435 "vhost_scsi_controller_add_target", 00:04:56.435 "vhost_start_scsi_controller", 00:04:56.435 "vhost_create_scsi_controller", 00:04:56.435 "thread_set_cpumask", 00:04:56.435 "scheduler_set_options", 00:04:56.435 "framework_get_governor", 00:04:56.435 "framework_get_scheduler", 00:04:56.435 "framework_set_scheduler", 00:04:56.435 "framework_get_reactors", 00:04:56.435 "thread_get_io_channels", 00:04:56.435 "thread_get_pollers", 00:04:56.435 "thread_get_stats", 00:04:56.435 "framework_monitor_context_switch", 00:04:56.435 "spdk_kill_instance", 00:04:56.435 "log_enable_timestamps", 00:04:56.435 "log_get_flags", 00:04:56.435 "log_clear_flag", 
00:04:56.435 "log_set_flag", 00:04:56.435 "log_get_level", 00:04:56.435 "log_set_level", 00:04:56.435 "log_get_print_level", 00:04:56.435 "log_set_print_level", 00:04:56.435 "framework_enable_cpumask_locks", 00:04:56.435 "framework_disable_cpumask_locks", 00:04:56.435 "framework_wait_init", 00:04:56.435 "framework_start_init", 00:04:56.435 "scsi_get_devices", 00:04:56.435 "bdev_get_histogram", 00:04:56.435 "bdev_enable_histogram", 00:04:56.435 "bdev_set_qos_limit", 00:04:56.435 "bdev_set_qd_sampling_period", 00:04:56.435 "bdev_get_bdevs", 00:04:56.435 "bdev_reset_iostat", 00:04:56.435 "bdev_get_iostat", 00:04:56.435 "bdev_examine", 00:04:56.435 "bdev_wait_for_examine", 00:04:56.435 "bdev_set_options", 00:04:56.435 "accel_get_stats", 00:04:56.435 "accel_set_options", 00:04:56.435 "accel_set_driver", 00:04:56.435 "accel_crypto_key_destroy", 00:04:56.435 "accel_crypto_keys_get", 00:04:56.435 "accel_crypto_key_create", 00:04:56.435 "accel_assign_opc", 00:04:56.435 "accel_get_module_info", 00:04:56.435 "accel_get_opc_assignments", 00:04:56.435 "vmd_rescan", 00:04:56.435 "vmd_remove_device", 00:04:56.435 "vmd_enable", 00:04:56.435 "sock_get_default_impl", 00:04:56.435 "sock_set_default_impl", 00:04:56.435 "sock_impl_set_options", 00:04:56.435 "sock_impl_get_options", 00:04:56.435 "iobuf_get_stats", 00:04:56.435 "iobuf_set_options", 00:04:56.435 "keyring_get_keys", 00:04:56.435 "framework_get_pci_devices", 00:04:56.435 "framework_get_config", 00:04:56.435 "framework_get_subsystems", 00:04:56.435 "fsdev_set_opts", 00:04:56.435 "fsdev_get_opts", 00:04:56.435 "trace_get_info", 00:04:56.435 "trace_get_tpoint_group_mask", 00:04:56.435 "trace_disable_tpoint_group", 00:04:56.435 "trace_enable_tpoint_group", 00:04:56.435 "trace_clear_tpoint_mask", 00:04:56.435 "trace_set_tpoint_mask", 00:04:56.435 "notify_get_notifications", 00:04:56.435 "notify_get_types", 00:04:56.435 "spdk_get_version", 00:04:56.435 "rpc_get_methods" 00:04:56.435 ] 00:04:56.435 21:12:14 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:56.435 21:12:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.435 21:12:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.695 21:12:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:56.695 21:12:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57823 00:04:56.695 21:12:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57823 ']' 00:04:56.695 21:12:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57823 00:04:56.695 21:12:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:56.695 21:12:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.695 21:12:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57823 00:04:56.695 21:12:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.695 21:12:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.695 21:12:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57823' 00:04:56.695 killing process with pid 57823 00:04:56.695 21:12:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57823 00:04:56.695 21:12:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57823 00:04:59.229 00:04:59.229 real 0m4.097s 00:04:59.229 user 0m7.240s 00:04:59.229 sys 0m0.636s 00:04:59.229 21:12:17 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.229 21:12:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.229 ************************************ 00:04:59.229 END TEST spdkcli_tcp 00:04:59.229 ************************************ 00:04:59.229 21:12:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.229 21:12:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.229 21:12:17 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.229 21:12:17 -- common/autotest_common.sh@10 -- # set +x 00:04:59.229 ************************************ 00:04:59.229 START TEST dpdk_mem_utility 00:04:59.229 ************************************ 00:04:59.229 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.229 * Looking for test storage... 00:04:59.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:59.229 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.229 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.229 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.229 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:59.229 
21:12:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.229 21:12:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:59.229 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.229 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.229 --rc genhtml_branch_coverage=1 00:04:59.229 --rc genhtml_function_coverage=1 00:04:59.229 --rc genhtml_legend=1 00:04:59.229 --rc geninfo_all_blocks=1 00:04:59.229 --rc geninfo_unexecuted_blocks=1 00:04:59.229 00:04:59.229 ' 00:04:59.229 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.229 --rc 
genhtml_branch_coverage=1 00:04:59.229 --rc genhtml_function_coverage=1 00:04:59.229 --rc genhtml_legend=1 00:04:59.229 --rc geninfo_all_blocks=1 00:04:59.229 --rc geninfo_unexecuted_blocks=1 00:04:59.229 00:04:59.229 ' 00:04:59.229 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.229 --rc genhtml_branch_coverage=1 00:04:59.229 --rc genhtml_function_coverage=1 00:04:59.229 --rc genhtml_legend=1 00:04:59.229 --rc geninfo_all_blocks=1 00:04:59.229 --rc geninfo_unexecuted_blocks=1 00:04:59.229 00:04:59.229 ' 00:04:59.229 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.229 --rc genhtml_branch_coverage=1 00:04:59.229 --rc genhtml_function_coverage=1 00:04:59.229 --rc genhtml_legend=1 00:04:59.229 --rc geninfo_all_blocks=1 00:04:59.229 --rc geninfo_unexecuted_blocks=1 00:04:59.229 00:04:59.229 ' 00:04:59.229 21:12:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.229 21:12:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57945 00:04:59.229 21:12:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.229 21:12:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57945 00:04:59.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:59.230 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57945 ']' 00:04:59.230 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.230 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.230 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.230 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.230 21:12:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.488 [2024-11-26 21:12:17.385297] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:04:59.488 [2024-11-26 21:12:17.385506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57945 ] 00:04:59.488 [2024-11-26 21:12:17.546494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.747 [2024-11-26 21:12:17.652364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.318 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.318 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:00.318 21:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:00.318 21:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:00.318 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.318 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.318 { 00:05:00.318 "filename": "/tmp/spdk_mem_dump.txt" 00:05:00.318 } 00:05:00.318 
21:12:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.318 21:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:00.579 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:00.579 1 heaps totaling size 824.000000 MiB 00:05:00.579 size: 824.000000 MiB heap id: 0 00:05:00.579 end heaps---------- 00:05:00.579 9 mempools totaling size 603.782043 MiB 00:05:00.579 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:00.579 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:00.579 size: 100.555481 MiB name: bdev_io_57945 00:05:00.579 size: 50.003479 MiB name: msgpool_57945 00:05:00.579 size: 36.509338 MiB name: fsdev_io_57945 00:05:00.579 size: 21.763794 MiB name: PDU_Pool 00:05:00.579 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:00.579 size: 4.133484 MiB name: evtpool_57945 00:05:00.579 size: 0.026123 MiB name: Session_Pool 00:05:00.579 end mempools------- 00:05:00.579 6 memzones totaling size 4.142822 MiB 00:05:00.579 size: 1.000366 MiB name: RG_ring_0_57945 00:05:00.579 size: 1.000366 MiB name: RG_ring_1_57945 00:05:00.579 size: 1.000366 MiB name: RG_ring_4_57945 00:05:00.579 size: 1.000366 MiB name: RG_ring_5_57945 00:05:00.579 size: 0.125366 MiB name: RG_ring_2_57945 00:05:00.579 size: 0.015991 MiB name: RG_ring_3_57945 00:05:00.579 end memzones------- 00:05:00.579 21:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:00.579 heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18 00:05:00.579 list of free elements. 
size: 16.779419 MiB 00:05:00.579 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:00.579 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:00.579 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:00.579 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:00.579 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:00.579 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:00.579 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:00.579 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:00.579 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:00.579 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:00.579 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:00.579 element at address: 0x20001b400000 with size: 0.560974 MiB 00:05:00.579 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:00.579 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:00.579 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:00.579 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:00.579 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:00.579 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:00.579 list of standard malloc elements. 
size: 199.289673 MiB 00:05:00.579 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:00.579 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:00.579 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:00.579 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:00.579 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:00.579 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:00.579 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:00.579 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:00.579 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:00.579 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:00.579 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:00.579 [several hundred further elements of 0.000244 MiB each, addresses 0x2000002d7b00 through 0x20002886fe80, omitted] 00:05:00.581 list of memzone associated elements. 
size: 607.930908 MiB 00:05:00.581 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:00.581 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:00.581 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:00.581 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:00.581 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:00.581 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57945_0 00:05:00.581 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:00.581 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57945_0 00:05:00.581 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:00.581 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57945_0 00:05:00.581 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:00.581 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:00.581 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:00.581 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:00.581 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:00.581 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57945_0 00:05:00.581 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:00.581 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57945 00:05:00.581 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:00.581 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57945 00:05:00.581 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:00.581 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:00.581 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:00.581 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:00.581 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:00.581 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:00.581 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:00.581 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:00.581 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:00.581 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57945 00:05:00.581 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:00.581 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57945 00:05:00.581 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:00.581 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57945 00:05:00.581 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:00.581 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57945 00:05:00.581 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:00.581 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57945 00:05:00.581 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:00.581 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57945 00:05:00.581 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:00.581 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:00.581 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:00.581 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:00.581 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:00.581 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:00.581 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:00.581 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57945 00:05:00.581 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:00.581 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57945 00:05:00.581 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:00.581 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:00.581 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:00.581 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:00.581 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:00.581 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57945 00:05:00.581 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:00.581 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:00.581 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:00.581 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57945 00:05:00.581 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:00.581 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57945 00:05:00.582 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:00.582 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57945 00:05:00.582 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:00.582 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:00.582 21:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:00.582 21:12:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57945 00:05:00.582 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57945 ']' 00:05:00.582 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57945 00:05:00.582 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:00.582 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.582 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57945 00:05:00.582 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.582 21:12:18 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.582 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57945' 00:05:00.582 killing process with pid 57945 00:05:00.582 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57945 00:05:00.582 21:12:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57945 00:05:03.123 00:05:03.123 real 0m3.837s 00:05:03.123 user 0m3.765s 00:05:03.123 sys 0m0.513s 00:05:03.123 21:12:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.123 21:12:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.123 ************************************ 00:05:03.123 END TEST dpdk_mem_utility 00:05:03.123 ************************************ 00:05:03.123 21:12:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.123 21:12:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.123 21:12:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.123 21:12:20 -- common/autotest_common.sh@10 -- # set +x 00:05:03.123 ************************************ 00:05:03.123 START TEST event 00:05:03.123 ************************************ 00:05:03.123 21:12:20 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.123 * Looking for test storage... 
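The `killprocess` trace above shows the safeguard sequence: a `'[' -z ... ']'` argument guard, a `kill -0` probe for existence, a `ps --no-headers -o comm=` lookup of the command name, and a refusal to kill a wrapping `sudo`. A hedged standalone sketch of that pattern follows; the function name and the echoed message merely mirror the log, and this is not the actual `autotest_common.sh` implementation.

```shell
# Hedged sketch of the killprocess pattern seen in the trace above.
# 'killprocess' is illustrative; the real helper lives in autotest_common.sh.
killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1                        # '[' -z ... ']' guard from the trace
  kill -0 "$pid" 2>/dev/null || return 0           # kill -0 only probes; no signal sent
  local name
  name=$(ps --no-headers -o comm= "$pid")          # resolve the command name, as in the log
  [ "$name" = "sudo" ] && return 1                 # never kill a wrapping sudo process
  echo "killing process with pid $pid"
  kill "$pid" && wait "$pid" 2>/dev/null || true   # reap it if it was our child
}
```

In the log this runs against the reactor process (comm resolves to `reactor_0`, not `sudo`), so the kill proceeds and the script then waits for pid 57945.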
00:05:03.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:03.123 21:12:21 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.123 21:12:21 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.123 21:12:21 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.123 21:12:21 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.123 21:12:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.123 21:12:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.123 21:12:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.123 21:12:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.123 21:12:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.123 21:12:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.123 21:12:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.123 21:12:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.123 21:12:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.123 21:12:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.123 21:12:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.123 21:12:21 event -- scripts/common.sh@344 -- # case "$op" in 00:05:03.123 21:12:21 event -- scripts/common.sh@345 -- # : 1 00:05:03.124 21:12:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.124 21:12:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.124 21:12:21 event -- scripts/common.sh@365 -- # decimal 1 00:05:03.124 21:12:21 event -- scripts/common.sh@353 -- # local d=1 00:05:03.124 21:12:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.124 21:12:21 event -- scripts/common.sh@355 -- # echo 1 00:05:03.124 21:12:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.124 21:12:21 event -- scripts/common.sh@366 -- # decimal 2 00:05:03.124 21:12:21 event -- scripts/common.sh@353 -- # local d=2 00:05:03.124 21:12:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.124 21:12:21 event -- scripts/common.sh@355 -- # echo 2 00:05:03.124 21:12:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.124 21:12:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.124 21:12:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.124 21:12:21 event -- scripts/common.sh@368 -- # return 0 00:05:03.124 21:12:21 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.124 21:12:21 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.124 --rc genhtml_branch_coverage=1 00:05:03.124 --rc genhtml_function_coverage=1 00:05:03.124 --rc genhtml_legend=1 00:05:03.124 --rc geninfo_all_blocks=1 00:05:03.124 --rc geninfo_unexecuted_blocks=1 00:05:03.124 00:05:03.124 ' 00:05:03.124 21:12:21 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.124 --rc genhtml_branch_coverage=1 00:05:03.124 --rc genhtml_function_coverage=1 00:05:03.124 --rc genhtml_legend=1 00:05:03.124 --rc geninfo_all_blocks=1 00:05:03.124 --rc geninfo_unexecuted_blocks=1 00:05:03.124 00:05:03.124 ' 00:05:03.124 21:12:21 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.124 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:03.124 --rc genhtml_branch_coverage=1 00:05:03.124 --rc genhtml_function_coverage=1 00:05:03.124 --rc genhtml_legend=1 00:05:03.124 --rc geninfo_all_blocks=1 00:05:03.124 --rc geninfo_unexecuted_blocks=1 00:05:03.124 00:05:03.124 ' 00:05:03.124 21:12:21 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.124 --rc genhtml_branch_coverage=1 00:05:03.124 --rc genhtml_function_coverage=1 00:05:03.124 --rc genhtml_legend=1 00:05:03.124 --rc geninfo_all_blocks=1 00:05:03.124 --rc geninfo_unexecuted_blocks=1 00:05:03.124 00:05:03.124 ' 00:05:03.124 21:12:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:03.124 21:12:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:03.124 21:12:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.124 21:12:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:03.124 21:12:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.124 21:12:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.124 ************************************ 00:05:03.124 START TEST event_perf 00:05:03.124 ************************************ 00:05:03.124 21:12:21 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.124 Running I/O for 1 seconds...[2024-11-26 21:12:21.248481] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:03.124 [2024-11-26 21:12:21.248623] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58053 ] 00:05:03.383 [2024-11-26 21:12:21.424442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.383 [2024-11-26 21:12:21.533285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.383 [2024-11-26 21:12:21.533494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.383 [2024-11-26 21:12:21.533625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.383 Running I/O for 1 seconds...[2024-11-26 21:12:21.533666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.762 00:05:04.762 lcore 0: 215454 00:05:04.762 lcore 1: 215454 00:05:04.762 lcore 2: 215454 00:05:04.762 lcore 3: 215452 00:05:04.762 done. 
00:05:04.762 00:05:04.762 real 0m1.566s 00:05:04.762 user 0m4.329s 00:05:04.762 sys 0m0.119s 00:05:04.762 21:12:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.762 21:12:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.762 ************************************ 00:05:04.762 END TEST event_perf 00:05:04.762 ************************************ 00:05:04.762 21:12:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:04.762 21:12:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:04.762 21:12:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.762 21:12:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.762 ************************************ 00:05:04.762 START TEST event_reactor 00:05:04.762 ************************************ 00:05:04.762 21:12:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:04.762 [2024-11-26 21:12:22.875233] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:04.763 [2024-11-26 21:12:22.875341] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58092 ] 00:05:05.021 [2024-11-26 21:12:23.044128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.021 [2024-11-26 21:12:23.153942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.401 test_start 00:05:06.401 oneshot 00:05:06.401 tick 100 00:05:06.401 tick 100 00:05:06.401 tick 250 00:05:06.401 tick 100 00:05:06.401 tick 100 00:05:06.401 tick 100 00:05:06.401 tick 250 00:05:06.401 tick 500 00:05:06.401 tick 100 00:05:06.401 tick 100 00:05:06.401 tick 250 00:05:06.401 tick 100 00:05:06.401 tick 100 00:05:06.401 test_end 00:05:06.401 00:05:06.401 real 0m1.536s 00:05:06.401 user 0m1.337s 00:05:06.401 sys 0m0.091s 00:05:06.401 21:12:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.401 21:12:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:06.401 ************************************ 00:05:06.401 END TEST event_reactor 00:05:06.401 ************************************ 00:05:06.401 21:12:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.401 21:12:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:06.401 21:12:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.401 21:12:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.401 ************************************ 00:05:06.401 START TEST event_reactor_perf 00:05:06.401 ************************************ 00:05:06.401 21:12:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.401 [2024-11-26 
21:12:24.473909] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:06.401 [2024-11-26 21:12:24.474029] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58129 ] 00:05:06.660 [2024-11-26 21:12:24.648644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.660 [2024-11-26 21:12:24.755146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.043 test_start 00:05:08.043 test_end 00:05:08.043 Performance: 410115 events per second 00:05:08.043 00:05:08.043 real 0m1.542s 00:05:08.043 user 0m1.342s 00:05:08.043 sys 0m0.092s 00:05:08.043 21:12:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.043 21:12:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.043 ************************************ 00:05:08.043 END TEST event_reactor_perf 00:05:08.043 ************************************ 00:05:08.043 21:12:26 event -- event/event.sh@49 -- # uname -s 00:05:08.043 21:12:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:08.043 21:12:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:08.043 21:12:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.043 21:12:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.043 21:12:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.043 ************************************ 00:05:08.043 START TEST event_scheduler 00:05:08.043 ************************************ 00:05:08.043 21:12:26 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:08.043 * Looking for test storage... 
00:05:08.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:08.043 21:12:26 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.043 21:12:26 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.043 21:12:26 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.304 21:12:26 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.305 21:12:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:08.305 21:12:26 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.305 21:12:26 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.305 --rc genhtml_branch_coverage=1 00:05:08.305 --rc genhtml_function_coverage=1 00:05:08.305 --rc genhtml_legend=1 00:05:08.305 --rc geninfo_all_blocks=1 00:05:08.305 --rc geninfo_unexecuted_blocks=1 00:05:08.305 00:05:08.305 ' 00:05:08.305 21:12:26 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.305 --rc genhtml_branch_coverage=1 00:05:08.305 --rc genhtml_function_coverage=1 00:05:08.305 --rc 
genhtml_legend=1 00:05:08.305 --rc geninfo_all_blocks=1 00:05:08.305 --rc geninfo_unexecuted_blocks=1 00:05:08.305 00:05:08.305 ' 00:05:08.305 21:12:26 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.305 --rc genhtml_branch_coverage=1 00:05:08.305 --rc genhtml_function_coverage=1 00:05:08.305 --rc genhtml_legend=1 00:05:08.305 --rc geninfo_all_blocks=1 00:05:08.305 --rc geninfo_unexecuted_blocks=1 00:05:08.305 00:05:08.305 ' 00:05:08.305 21:12:26 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.305 --rc genhtml_branch_coverage=1 00:05:08.305 --rc genhtml_function_coverage=1 00:05:08.305 --rc genhtml_legend=1 00:05:08.305 --rc geninfo_all_blocks=1 00:05:08.305 --rc geninfo_unexecuted_blocks=1 00:05:08.305 00:05:08.305 ' 00:05:08.305 21:12:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.305 21:12:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58205 00:05:08.305 21:12:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:08.305 21:12:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.305 21:12:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58205 00:05:08.305 21:12:26 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58205 ']' 00:05:08.305 21:12:26 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.305 21:12:26 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:08.305 21:12:26 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.305 21:12:26 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.305 21:12:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.305 [2024-11-26 21:12:26.350375] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:08.305 [2024-11-26 21:12:26.350526] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58205 ] 00:05:08.565 [2024-11-26 21:12:26.523533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.565 [2024-11-26 21:12:26.636930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.565 [2024-11-26 21:12:26.637115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.565 [2024-11-26 21:12:26.637294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.565 [2024-11-26 21:12:26.637330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.134 21:12:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.134 21:12:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:09.134 21:12:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:09.134 21:12:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.134 21:12:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.134 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.134 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.134 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.134 POWER: Cannot set governor of lcore 0 to performance 00:05:09.134 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.134 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.134 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.134 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.134 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:09.134 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:09.134 POWER: Unable to set Power Management Environment for lcore 0 00:05:09.134 [2024-11-26 21:12:27.165705] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:09.134 [2024-11-26 21:12:27.165725] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:09.134 [2024-11-26 21:12:27.165735] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:09.134 [2024-11-26 21:12:27.165753] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:09.134 [2024-11-26 21:12:27.165761] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:09.134 [2024-11-26 21:12:27.165770] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:09.134 21:12:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.134 21:12:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:09.134 21:12:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.134 21:12:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.393 [2024-11-26 21:12:27.480065] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:09.393 21:12:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.393 21:12:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:09.393 21:12:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.393 21:12:27 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.393 21:12:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.393 ************************************ 00:05:09.393 START TEST scheduler_create_thread 00:05:09.393 ************************************ 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.393 2 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.393 3 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.393 4 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.393 5 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.393 6 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:09.393 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.394 21:12:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.652 7 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.652 8 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.652 9 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.652 10 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.652 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.653 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:09.653 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:09.653 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.653 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.653 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.653 21:12:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:09.653 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.653 21:12:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.591 21:12:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.591 21:12:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:10.591 21:12:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:10.591 21:12:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.591 21:12:28 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.529 21:12:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.529 00:05:11.529 real 0m2.140s 00:05:11.529 user 0m0.025s 00:05:11.529 sys 0m0.011s 00:05:11.529 21:12:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.529 21:12:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.529 ************************************ 00:05:11.529 END TEST scheduler_create_thread 00:05:11.529 ************************************ 00:05:11.789 21:12:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:11.789 21:12:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58205 00:05:11.789 21:12:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58205 ']' 00:05:11.789 21:12:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58205 00:05:11.789 21:12:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:11.789 21:12:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.789 21:12:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58205 00:05:11.789 21:12:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:11.789 21:12:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:11.789 21:12:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58205' 00:05:11.789 killing process with pid 58205 00:05:11.789 21:12:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58205 00:05:11.789 21:12:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58205 00:05:12.048 [2024-11-26 21:12:30.111974] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:13.429 00:05:13.429 real 0m5.208s 00:05:13.429 user 0m8.617s 00:05:13.429 sys 0m0.485s 00:05:13.429 21:12:31 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.429 21:12:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.429 ************************************ 00:05:13.429 END TEST event_scheduler 00:05:13.429 ************************************ 00:05:13.429 21:12:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:13.429 21:12:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:13.429 21:12:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.429 21:12:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.429 21:12:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.429 ************************************ 00:05:13.429 START TEST app_repeat 00:05:13.429 ************************************ 00:05:13.429 21:12:31 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58311 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:13.429 
21:12:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.429 Process app_repeat pid: 58311 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58311' 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.429 spdk_app_start Round 0 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:13.429 21:12:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58311 /var/tmp/spdk-nbd.sock 00:05:13.429 21:12:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58311 ']' 00:05:13.429 21:12:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.429 21:12:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.429 21:12:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.429 21:12:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.429 21:12:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.429 [2024-11-26 21:12:31.386192] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:13.429 [2024-11-26 21:12:31.386310] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58311 ] 00:05:13.429 [2024-11-26 21:12:31.552375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.690 [2024-11-26 21:12:31.657933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.690 [2024-11-26 21:12:31.658000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.259 21:12:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.259 21:12:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.259 21:12:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.536 Malloc0 00:05:14.536 21:12:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.799 Malloc1 00:05:14.799 21:12:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.799 21:12:32 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.799 21:12:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.799 /dev/nbd0 00:05:15.059 21:12:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.059 21:12:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.059 1+0 records in 00:05:15.059 1+0 
records out 00:05:15.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335746 s, 12.2 MB/s 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.059 21:12:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.059 21:12:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.059 21:12:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.059 21:12:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.059 /dev/nbd1 00:05:15.059 21:12:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.059 21:12:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.059 21:12:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.059 21:12:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.059 21:12:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.059 21:12:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.319 21:12:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.319 21:12:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.319 21:12:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.319 21:12:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.319 21:12:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.319 1+0 records in 00:05:15.319 1+0 records out 00:05:15.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381044 s, 10.7 MB/s 00:05:15.319 21:12:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.319 21:12:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.319 21:12:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.319 21:12:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.319 21:12:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.319 21:12:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.319 21:12:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.319 21:12:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.319 21:12:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.319 21:12:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.319 21:12:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.319 { 00:05:15.319 "nbd_device": "/dev/nbd0", 00:05:15.319 "bdev_name": "Malloc0" 00:05:15.319 }, 00:05:15.319 { 00:05:15.319 "nbd_device": "/dev/nbd1", 00:05:15.319 "bdev_name": "Malloc1" 00:05:15.319 } 00:05:15.319 ]' 00:05:15.319 21:12:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.319 { 00:05:15.319 "nbd_device": "/dev/nbd0", 00:05:15.319 "bdev_name": "Malloc0" 00:05:15.319 }, 00:05:15.319 { 00:05:15.319 "nbd_device": "/dev/nbd1", 00:05:15.319 "bdev_name": "Malloc1" 00:05:15.319 } 00:05:15.319 ]' 00:05:15.319 21:12:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.580 /dev/nbd1' 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.580 /dev/nbd1' 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.580 256+0 records in 00:05:15.580 256+0 records out 00:05:15.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0083956 s, 125 MB/s 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.580 256+0 records in 00:05:15.580 256+0 records out 00:05:15.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204268 s, 51.3 MB/s 00:05:15.580 21:12:33 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.580 256+0 records in 00:05:15.580 256+0 records out 00:05:15.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206436 s, 50.8 MB/s 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.580 21:12:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.840 21:12:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.100 21:12:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:16.100 21:12:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.100 21:12:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.100 21:12:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.100 21:12:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.100 21:12:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.100 21:12:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.100 21:12:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.100 21:12:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.100 21:12:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.100 21:12:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.360 21:12:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.360 21:12:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.360 21:12:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.360 21:12:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.360 21:12:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.360 21:12:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.360 21:12:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.619 21:12:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.001 [2024-11-26 21:12:35.772714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.001 [2024-11-26 21:12:35.879047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.001 [2024-11-26 21:12:35.879053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.001 
[2024-11-26 21:12:36.069781] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.001 [2024-11-26 21:12:36.069881] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.942 21:12:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.942 21:12:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:19.942 spdk_app_start Round 1 00:05:19.942 21:12:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58311 /var/tmp/spdk-nbd.sock 00:05:19.942 21:12:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58311 ']' 00:05:19.942 21:12:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.942 21:12:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.943 21:12:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:19.943 21:12:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.943 21:12:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.943 21:12:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.943 21:12:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.943 21:12:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.203 Malloc0 00:05:20.203 21:12:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.463 Malloc1 00:05:20.463 21:12:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.463 21:12:38 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.463 21:12:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.463 /dev/nbd0 00:05:20.722 21:12:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.722 21:12:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.722 1+0 records in 00:05:20.722 1+0 records out 00:05:20.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299434 s, 13.7 MB/s 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.722 
21:12:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.722 21:12:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.722 21:12:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.722 21:12:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.722 21:12:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.722 /dev/nbd1 00:05:20.722 21:12:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.722 21:12:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.723 21:12:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:20.723 21:12:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.723 21:12:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.723 21:12:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.723 21:12:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:20.723 21:12:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.723 21:12:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.723 21:12:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.723 21:12:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.723 1+0 records in 00:05:20.723 1+0 records out 00:05:20.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032396 s, 12.6 MB/s 00:05:20.723 21:12:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.983 21:12:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.983 21:12:38 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.983 21:12:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.983 21:12:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.983 21:12:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.983 21:12:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.983 21:12:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.983 21:12:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.983 21:12:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.983 21:12:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.983 { 00:05:20.983 "nbd_device": "/dev/nbd0", 00:05:20.983 "bdev_name": "Malloc0" 00:05:20.983 }, 00:05:20.983 { 00:05:20.983 "nbd_device": "/dev/nbd1", 00:05:20.983 "bdev_name": "Malloc1" 00:05:20.983 } 00:05:20.983 ]' 00:05:20.983 21:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.983 { 00:05:20.983 "nbd_device": "/dev/nbd0", 00:05:20.983 "bdev_name": "Malloc0" 00:05:20.983 }, 00:05:20.983 { 00:05:20.983 "nbd_device": "/dev/nbd1", 00:05:20.983 "bdev_name": "Malloc1" 00:05:20.983 } 00:05:20.983 ]' 00:05:20.983 21:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.243 /dev/nbd1' 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.243 /dev/nbd1' 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.243 
21:12:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.243 256+0 records in 00:05:21.243 256+0 records out 00:05:21.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00776977 s, 135 MB/s 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.243 256+0 records in 00:05:21.243 256+0 records out 00:05:21.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212722 s, 49.3 MB/s 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.243 256+0 records in 00:05:21.243 256+0 records out 00:05:21.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243663 s, 43.0 MB/s 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.243 21:12:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.503 21:12:39 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.503 21:12:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.503 21:12:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.503 21:12:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.503 21:12:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.503 21:12:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.503 21:12:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.503 21:12:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.503 21:12:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.503 21:12:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.762 21:12:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.762 21:12:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.762 21:12:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.762 21:12:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.762 21:12:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.762 21:12:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.762 21:12:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.762 21:12:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.762 21:12:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.762 21:12:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.762 21:12:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.763 21:12:39 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.763 21:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.763 21:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.022 21:12:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.022 21:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.022 21:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.022 21:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.022 21:12:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.022 21:12:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.022 21:12:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.022 21:12:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.022 21:12:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.022 21:12:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:22.282 21:12:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.663 [2024-11-26 21:12:41.437750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.663 [2024-11-26 21:12:41.540398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.663 [2024-11-26 21:12:41.540434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.663 [2024-11-26 21:12:41.726630] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.663 [2024-11-26 21:12:41.726718] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:25.572 21:12:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.572 21:12:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:25.572 spdk_app_start Round 2 00:05:25.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.572 21:12:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58311 /var/tmp/spdk-nbd.sock 00:05:25.572 21:12:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58311 ']' 00:05:25.572 21:12:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.572 21:12:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.572 21:12:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.572 21:12:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.572 21:12:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.572 21:12:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.572 21:12:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:25.572 21:12:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.831 Malloc0 00:05:25.831 21:12:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.091 Malloc1 00:05:26.091 21:12:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.091 
21:12:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.091 21:12:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.350 /dev/nbd0 00:05:26.350 21:12:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.350 21:12:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:26.350 21:12:44 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.350 1+0 records in 00:05:26.350 1+0 records out 00:05:26.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402561 s, 10.2 MB/s 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.350 21:12:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.351 21:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.351 21:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.351 21:12:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.610 /dev/nbd1 00:05:26.610 21:12:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.610 21:12:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.610 21:12:44 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.610 1+0 records in 00:05:26.610 1+0 records out 00:05:26.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386493 s, 10.6 MB/s 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.610 21:12:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.610 21:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.610 21:12:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.610 21:12:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.610 21:12:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.610 21:12:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.870 { 00:05:26.870 "nbd_device": "/dev/nbd0", 00:05:26.870 "bdev_name": "Malloc0" 00:05:26.870 }, 00:05:26.870 { 00:05:26.870 "nbd_device": "/dev/nbd1", 00:05:26.870 "bdev_name": 
"Malloc1" 00:05:26.870 } 00:05:26.870 ]' 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.870 { 00:05:26.870 "nbd_device": "/dev/nbd0", 00:05:26.870 "bdev_name": "Malloc0" 00:05:26.870 }, 00:05:26.870 { 00:05:26.870 "nbd_device": "/dev/nbd1", 00:05:26.870 "bdev_name": "Malloc1" 00:05:26.870 } 00:05:26.870 ]' 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.870 /dev/nbd1' 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.870 /dev/nbd1' 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.870 256+0 records in 00:05:26.870 256+0 records out 00:05:26.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138522 s, 75.7 MB/s 
00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.870 256+0 records in 00:05:26.870 256+0 records out 00:05:26.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242594 s, 43.2 MB/s 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.870 256+0 records in 00:05:26.870 256+0 records out 00:05:26.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256635 s, 40.9 MB/s 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.870 21:12:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.130 21:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.130 21:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.130 21:12:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.130 21:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.130 21:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.130 21:12:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.130 21:12:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.130 21:12:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.130 21:12:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.130 21:12:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.389 21:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.389 21:12:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:27.389 21:12:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.389 21:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.389 21:12:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.389 21:12:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.389 21:12:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.389 21:12:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.389 21:12:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.389 21:12:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.389 21:12:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.649 21:12:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.649 21:12:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.908 21:12:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.290 [2024-11-26 21:12:47.134775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.290 [2024-11-26 21:12:47.241572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.290 [2024-11-26 21:12:47.241577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.290 [2024-11-26 21:12:47.429568] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.290 [2024-11-26 21:12:47.429642] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.201 21:12:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58311 /var/tmp/spdk-nbd.sock 00:05:31.201 21:12:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58311 ']' 00:05:31.201 21:12:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:31.202 21:12:49 event.app_repeat -- event/event.sh@39 -- # killprocess 58311 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58311 ']' 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58311 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58311 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58311' 00:05:31.202 killing process with pid 58311 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58311 00:05:31.202 21:12:49 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58311 00:05:32.143 spdk_app_start is called in Round 0. 00:05:32.143 Shutdown signal received, stop current app iteration 00:05:32.143 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:05:32.143 spdk_app_start is called in Round 1. 00:05:32.143 Shutdown signal received, stop current app iteration 00:05:32.143 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:05:32.143 spdk_app_start is called in Round 2. 
00:05:32.143 Shutdown signal received, stop current app iteration 00:05:32.143 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:05:32.143 spdk_app_start is called in Round 3. 00:05:32.143 Shutdown signal received, stop current app iteration 00:05:32.143 21:12:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:32.143 21:12:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:32.143 00:05:32.143 real 0m18.976s 00:05:32.143 user 0m40.678s 00:05:32.143 sys 0m2.612s 00:05:32.143 21:12:50 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.143 21:12:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.143 ************************************ 00:05:32.143 END TEST app_repeat 00:05:32.143 ************************************ 00:05:32.403 21:12:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:32.403 21:12:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:32.403 21:12:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.403 21:12:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.403 21:12:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.403 ************************************ 00:05:32.403 START TEST cpu_locks 00:05:32.403 ************************************ 00:05:32.403 21:12:50 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:32.403 * Looking for test storage... 
00:05:32.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:32.403 21:12:50 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.403 21:12:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.403 21:12:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.663 21:12:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:32.663 21:12:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.664 21:12:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.664 21:12:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.664 21:12:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:32.664 21:12:50 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.664 21:12:50 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.664 --rc genhtml_branch_coverage=1 00:05:32.664 --rc genhtml_function_coverage=1 00:05:32.664 --rc genhtml_legend=1 00:05:32.664 --rc geninfo_all_blocks=1 00:05:32.664 --rc geninfo_unexecuted_blocks=1 00:05:32.664 00:05:32.664 ' 00:05:32.664 21:12:50 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.664 --rc genhtml_branch_coverage=1 00:05:32.664 --rc genhtml_function_coverage=1 00:05:32.664 --rc genhtml_legend=1 00:05:32.664 --rc geninfo_all_blocks=1 00:05:32.664 --rc geninfo_unexecuted_blocks=1 
00:05:32.664 00:05:32.664 ' 00:05:32.664 21:12:50 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.664 --rc genhtml_branch_coverage=1 00:05:32.664 --rc genhtml_function_coverage=1 00:05:32.664 --rc genhtml_legend=1 00:05:32.664 --rc geninfo_all_blocks=1 00:05:32.664 --rc geninfo_unexecuted_blocks=1 00:05:32.664 00:05:32.664 ' 00:05:32.664 21:12:50 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.664 --rc genhtml_branch_coverage=1 00:05:32.664 --rc genhtml_function_coverage=1 00:05:32.664 --rc genhtml_legend=1 00:05:32.664 --rc geninfo_all_blocks=1 00:05:32.664 --rc geninfo_unexecuted_blocks=1 00:05:32.664 00:05:32.664 ' 00:05:32.664 21:12:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:32.664 21:12:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:32.664 21:12:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:32.664 21:12:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:32.664 21:12:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.664 21:12:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.664 21:12:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.664 ************************************ 00:05:32.664 START TEST default_locks 00:05:32.664 ************************************ 00:05:32.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:32.664 21:12:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:32.664 21:12:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58747 00:05:32.664 21:12:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58747 00:05:32.664 21:12:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58747 ']' 00:05:32.664 21:12:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.664 21:12:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.664 21:12:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.664 21:12:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.664 21:12:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.664 21:12:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.664 [2024-11-26 21:12:50.688384] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:32.664 [2024-11-26 21:12:50.688502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58747 ] 00:05:32.924 [2024-11-26 21:12:50.840677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.924 [2024-11-26 21:12:50.951721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.893 21:12:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.893 21:12:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:33.893 21:12:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58747 00:05:33.893 21:12:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58747 00:05:33.893 21:12:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.153 21:12:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58747 00:05:34.153 21:12:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58747 ']' 00:05:34.153 21:12:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58747 00:05:34.153 21:12:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:34.153 21:12:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.153 21:12:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58747 00:05:34.153 killing process with pid 58747 00:05:34.153 21:12:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.153 21:12:52 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.153 21:12:52 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58747' 00:05:34.153 21:12:52 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58747 00:05:34.153 21:12:52 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58747 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58747 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58747 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58747 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58747 ']' 00:05:36.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.694 ERROR: process (pid: 58747) is no longer running 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.694 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58747) - No such process 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:36.694 ************************************ 00:05:36.694 END TEST default_locks 00:05:36.694 ************************************ 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.694 00:05:36.694 real 0m3.976s 00:05:36.694 user 0m3.898s 00:05:36.694 sys 0m0.677s 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.694 21:12:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.694 21:12:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:36.694 21:12:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:36.694 21:12:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.694 21:12:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.694 ************************************ 00:05:36.694 START TEST default_locks_via_rpc 00:05:36.694 ************************************ 00:05:36.694 21:12:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:36.694 21:12:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58822 00:05:36.694 21:12:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.694 21:12:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58822 00:05:36.694 21:12:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58822 ']' 00:05:36.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.694 21:12:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.694 21:12:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.694 21:12:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.694 21:12:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.694 21:12:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.694 [2024-11-26 21:12:54.728740] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:05:36.694 [2024-11-26 21:12:54.728953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58822 ] 00:05:36.953 [2024-11-26 21:12:54.901098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.953 [2024-11-26 21:12:55.013191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.889 21:12:55 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58822 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58822 00:05:37.889 21:12:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.148 21:12:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58822 00:05:38.148 21:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58822 ']' 00:05:38.148 21:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58822 00:05:38.148 21:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:38.148 21:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.148 21:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58822 00:05:38.148 killing process with pid 58822 00:05:38.148 21:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.148 21:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.148 21:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58822' 00:05:38.148 21:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58822 00:05:38.148 21:12:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58822 00:05:40.683 00:05:40.683 real 0m3.924s 00:05:40.683 user 0m3.857s 00:05:40.683 sys 0m0.619s 00:05:40.683 21:12:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.683 21:12:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.683 ************************************ 00:05:40.683 END TEST default_locks_via_rpc 00:05:40.683 ************************************ 00:05:40.683 21:12:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:40.683 21:12:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.683 21:12:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.683 21:12:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.683 ************************************ 00:05:40.683 START TEST non_locking_app_on_locked_coremask 00:05:40.683 ************************************ 00:05:40.683 21:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:40.683 21:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58891 00:05:40.683 21:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.683 21:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58891 /var/tmp/spdk.sock 00:05:40.683 21:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58891 ']' 00:05:40.683 21:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.683 21:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.683 21:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:40.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.683 21:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.683 21:12:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.683 [2024-11-26 21:12:58.728504] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:40.683 [2024-11-26 21:12:58.728694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:05:40.942 [2024-11-26 21:12:58.885440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.942 [2024-11-26 21:12:59.000865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.880 21:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.880 21:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.880 21:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.880 21:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58912 00:05:41.880 21:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58912 /var/tmp/spdk2.sock 00:05:41.880 21:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58912 ']' 00:05:41.880 21:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.880 21:12:59 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.880 21:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.881 21:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.881 21:12:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.881 [2024-11-26 21:12:59.944356] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:41.881 [2024-11-26 21:12:59.944468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58912 ] 00:05:42.140 [2024-11-26 21:13:00.115415] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:42.140 [2024-11-26 21:13:00.115466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.398 [2024-11-26 21:13:00.339899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.936 21:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.936 21:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.936 21:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58891 00:05:44.936 21:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58891 00:05:44.936 21:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.936 21:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58891 00:05:44.936 21:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58891 ']' 00:05:44.936 21:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58891 00:05:44.936 21:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.936 21:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.936 21:13:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58891 00:05:44.936 killing process with pid 58891 00:05:44.936 21:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.936 21:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.936 21:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58891' 00:05:44.936 21:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58891 00:05:44.936 21:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58891 00:05:50.263 21:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58912 00:05:50.263 21:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58912 ']' 00:05:50.263 21:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58912 00:05:50.263 21:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.263 21:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.263 21:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58912 00:05:50.263 killing process with pid 58912 00:05:50.263 21:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.263 21:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.263 21:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58912' 00:05:50.263 21:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58912 00:05:50.263 21:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58912 00:05:52.176 ************************************ 00:05:52.176 END TEST non_locking_app_on_locked_coremask 00:05:52.176 ************************************ 00:05:52.176 00:05:52.176 real 0m11.378s 
00:05:52.176 user 0m11.585s 00:05:52.176 sys 0m1.217s 00:05:52.176 21:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.176 21:13:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.176 21:13:10 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:52.176 21:13:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.176 21:13:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.176 21:13:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.176 ************************************ 00:05:52.176 START TEST locking_app_on_unlocked_coremask 00:05:52.176 ************************************ 00:05:52.176 21:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:52.176 21:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59057 00:05:52.176 21:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:52.176 21:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59057 /var/tmp/spdk.sock 00:05:52.176 21:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59057 ']' 00:05:52.176 21:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.176 21:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.176 21:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.176 21:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.176 21:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.176 [2024-11-26 21:13:10.170716] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:52.176 [2024-11-26 21:13:10.170832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59057 ] 00:05:52.435 [2024-11-26 21:13:10.345564] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:52.435 [2024-11-26 21:13:10.345696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.435 [2024-11-26 21:13:10.461645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.374 21:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.374 21:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.374 21:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.374 21:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59073 00:05:53.374 21:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59073 /var/tmp/spdk2.sock 00:05:53.374 21:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59073 ']' 00:05:53.374 21:13:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.374 21:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.374 21:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.374 21:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.374 21:13:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.374 [2024-11-26 21:13:11.373420] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:53.374 [2024-11-26 21:13:11.373613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59073 ] 00:05:53.633 [2024-11-26 21:13:11.543317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.633 [2024-11-26 21:13:11.769777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.170 21:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.170 21:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.170 21:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59073 00:05:56.170 21:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59073 00:05:56.170 21:13:13 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.170 21:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59057 00:05:56.171 21:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59057 ']' 00:05:56.171 21:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59057 00:05:56.171 21:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.171 21:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.171 21:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59057 00:05:56.171 killing process with pid 59057 00:05:56.171 21:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.171 21:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.171 21:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59057' 00:05:56.171 21:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59057 00:05:56.171 21:13:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59057 00:06:01.457 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59073 00:06:01.457 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59073 ']' 00:06:01.457 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59073 00:06:01.457 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.457 
21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.457 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59073 00:06:01.457 killing process with pid 59073 00:06:01.457 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.457 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.457 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59073' 00:06:01.457 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59073 00:06:01.457 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59073 00:06:03.362 00:06:03.362 real 0m11.258s 00:06:03.362 user 0m11.515s 00:06:03.362 sys 0m1.074s 00:06:03.362 ************************************ 00:06:03.362 END TEST locking_app_on_unlocked_coremask 00:06:03.362 ************************************ 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.362 21:13:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.362 21:13:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.362 21:13:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.362 21:13:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.362 ************************************ 00:06:03.362 START TEST locking_app_on_locked_coremask 00:06:03.362 
************************************ 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59222 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59222 /var/tmp/spdk.sock 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59222 ']' 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.362 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.362 [2024-11-26 21:13:21.493301] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:03.362 [2024-11-26 21:13:21.493418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59222 ] 00:06:03.621 [2024-11-26 21:13:21.645736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.621 [2024-11-26 21:13:21.750926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59238 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59238 /var/tmp/spdk2.sock 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59238 /var/tmp/spdk2.sock 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59238 /var/tmp/spdk2.sock 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59238 ']' 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.558 21:13:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.818 [2024-11-26 21:13:22.712463] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:04.818 [2024-11-26 21:13:22.712679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59238 ] 00:06:04.818 [2024-11-26 21:13:22.881091] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59222 has claimed it. 00:06:04.818 [2024-11-26 21:13:22.881161] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:05.385 ERROR: process (pid: 59238) is no longer running 00:06:05.385 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59238) - No such process 00:06:05.385 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.385 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:05.385 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:05.385 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.385 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.385 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.385 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59222 00:06:05.385 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59222 00:06:05.385 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.644 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59222 00:06:05.644 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59222 ']' 00:06:05.644 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59222 00:06:05.644 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.644 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.644 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59222 00:06:05.644 
killing process with pid 59222 00:06:05.644 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.644 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.644 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59222' 00:06:05.644 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59222 00:06:05.645 21:13:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59222 00:06:08.238 ************************************ 00:06:08.238 END TEST locking_app_on_locked_coremask 00:06:08.238 ************************************ 00:06:08.238 00:06:08.238 real 0m4.737s 00:06:08.238 user 0m4.911s 00:06:08.238 sys 0m0.766s 00:06:08.238 21:13:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.238 21:13:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.238 21:13:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:08.238 21:13:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.238 21:13:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.238 21:13:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.238 ************************************ 00:06:08.238 START TEST locking_overlapped_coremask 00:06:08.238 ************************************ 00:06:08.238 21:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:08.238 21:13:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59310 00:06:08.238 21:13:26 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:08.238 21:13:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59310 /var/tmp/spdk.sock 00:06:08.238 21:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59310 ']' 00:06:08.238 21:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.238 21:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.238 21:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.238 21:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.238 21:13:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.238 [2024-11-26 21:13:26.295912] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:08.238 [2024-11-26 21:13:26.296048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59310 ] 00:06:08.498 [2024-11-26 21:13:26.472308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.498 [2024-11-26 21:13:26.583037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.498 [2024-11-26 21:13:26.583227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.498 [2024-11-26 21:13:26.583178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59333 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59333 /var/tmp/spdk2.sock 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59333 /var/tmp/spdk2.sock 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59333 /var/tmp/spdk2.sock 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59333 ']' 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.430 21:13:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.687 [2024-11-26 21:13:27.601056] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:09.687 [2024-11-26 21:13:27.601235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59333 ] 00:06:09.687 [2024-11-26 21:13:27.772653] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59310 has claimed it. 00:06:09.687 [2024-11-26 21:13:27.772718] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:10.251 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59333) - No such process 00:06:10.251 ERROR: process (pid: 59333) is no longer running 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59310 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59310 ']' 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59310 00:06:10.251 21:13:28 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59310 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59310' 00:06:10.251 killing process with pid 59310 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59310 00:06:10.251 21:13:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59310 00:06:12.782 00:06:12.782 real 0m4.523s 00:06:12.782 user 0m12.369s 00:06:12.782 sys 0m0.638s 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.782 ************************************ 00:06:12.782 END TEST locking_overlapped_coremask 00:06:12.782 ************************************ 00:06:12.782 21:13:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.782 21:13:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.782 21:13:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.782 21:13:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.782 ************************************ 00:06:12.782 START TEST 
locking_overlapped_coremask_via_rpc 00:06:12.782 ************************************ 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59397 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59397 /var/tmp/spdk.sock 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59397 ']' 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.782 21:13:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.782 [2024-11-26 21:13:30.882593] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:12.782 [2024-11-26 21:13:30.882738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59397 ] 00:06:13.039 [2024-11-26 21:13:31.060237] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:13.039 [2024-11-26 21:13:31.060286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.039 [2024-11-26 21:13:31.178096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.039 [2024-11-26 21:13:31.178237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.039 [2024-11-26 21:13:31.178273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.972 21:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.972 21:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.972 21:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59421 00:06:13.972 21:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59421 /var/tmp/spdk2.sock 00:06:13.972 21:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.972 21:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59421 ']' 00:06:13.972 21:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.972 21:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.972 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.972 21:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.972 21:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.972 21:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.230 [2024-11-26 21:13:32.160910] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:14.230 [2024-11-26 21:13:32.161047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59421 ] 00:06:14.230 [2024-11-26 21:13:32.335302] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:14.230 [2024-11-26 21:13:32.335355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.487 [2024-11-26 21:13:32.576975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.487 [2024-11-26 21:13:32.577134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.487 [2024-11-26 21:13:32.577174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.014 21:13:34 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.014 [2024-11-26 21:13:34.757142] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59397 has claimed it. 00:06:17.014 request: 00:06:17.014 { 00:06:17.014 "method": "framework_enable_cpumask_locks", 00:06:17.014 "req_id": 1 00:06:17.014 } 00:06:17.014 Got JSON-RPC error response 00:06:17.014 response: 00:06:17.014 { 00:06:17.014 "code": -32603, 00:06:17.014 "message": "Failed to claim CPU core: 2" 00:06:17.014 } 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59397 /var/tmp/spdk.sock 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59397 ']' 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59421 /var/tmp/spdk2.sock 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59421 ']' 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.014 21:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.272 21:13:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.272 21:13:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.272 21:13:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:17.272 21:13:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.272 21:13:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.272 21:13:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.272 00:06:17.272 real 0m4.428s 00:06:17.272 user 0m1.327s 00:06:17.272 sys 0m0.193s 00:06:17.272 21:13:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.272 21:13:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.272 ************************************ 00:06:17.272 END TEST locking_overlapped_coremask_via_rpc 00:06:17.272 ************************************ 00:06:17.272 21:13:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:17.272 21:13:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59397 ]] 00:06:17.272 21:13:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59397 00:06:17.272 21:13:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59397 ']' 00:06:17.272 21:13:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59397 00:06:17.272 21:13:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:17.272 21:13:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.272 21:13:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59397 00:06:17.272 21:13:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.272 21:13:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.272 21:13:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59397' 00:06:17.272 killing process with pid 59397 00:06:17.272 21:13:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59397 00:06:17.272 21:13:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59397 00:06:19.939 21:13:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59421 ]] 00:06:19.939 21:13:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59421 00:06:19.939 21:13:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59421 ']' 00:06:19.939 21:13:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59421 00:06:19.939 21:13:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:19.939 21:13:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.939 21:13:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59421 00:06:19.939 21:13:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:19.939 21:13:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:19.939 21:13:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59421' 00:06:19.939 killing 
process with pid 59421 00:06:19.939 21:13:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59421 00:06:19.939 21:13:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59421 00:06:22.471 21:13:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.471 21:13:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:22.471 21:13:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59397 ]] 00:06:22.471 21:13:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59397 00:06:22.471 21:13:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59397 ']' 00:06:22.471 21:13:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59397 00:06:22.471 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59397) - No such process 00:06:22.471 21:13:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59397 is not found' 00:06:22.471 Process with pid 59397 is not found 00:06:22.471 21:13:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59421 ]] 00:06:22.471 21:13:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59421 00:06:22.471 21:13:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59421 ']' 00:06:22.471 21:13:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59421 00:06:22.471 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59421) - No such process 00:06:22.471 21:13:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59421 is not found' 00:06:22.471 Process with pid 59421 is not found 00:06:22.471 21:13:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.471 00:06:22.471 real 0m49.909s 00:06:22.471 user 1m26.246s 00:06:22.471 sys 0m6.363s 00:06:22.471 21:13:40 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.471 21:13:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.471 
************************************ 00:06:22.471 END TEST cpu_locks 00:06:22.471 ************************************ 00:06:22.471 ************************************ 00:06:22.471 END TEST event 00:06:22.471 ************************************ 00:06:22.471 00:06:22.471 real 1m19.355s 00:06:22.471 user 2m22.788s 00:06:22.471 sys 0m10.147s 00:06:22.471 21:13:40 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.471 21:13:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.471 21:13:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:22.471 21:13:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.471 21:13:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.471 21:13:40 -- common/autotest_common.sh@10 -- # set +x 00:06:22.471 ************************************ 00:06:22.471 START TEST thread 00:06:22.471 ************************************ 00:06:22.471 21:13:40 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:22.471 * Looking for test storage... 
00:06:22.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:22.471 21:13:40 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.471 21:13:40 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.471 21:13:40 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.471 21:13:40 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.471 21:13:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.471 21:13:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.471 21:13:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.471 21:13:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.471 21:13:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.471 21:13:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.471 21:13:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.471 21:13:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.471 21:13:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.471 21:13:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.471 21:13:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.471 21:13:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:22.471 21:13:40 thread -- scripts/common.sh@345 -- # : 1 00:06:22.471 21:13:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.471 21:13:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.471 21:13:40 thread -- scripts/common.sh@365 -- # decimal 1 00:06:22.472 21:13:40 thread -- scripts/common.sh@353 -- # local d=1 00:06:22.472 21:13:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.472 21:13:40 thread -- scripts/common.sh@355 -- # echo 1 00:06:22.472 21:13:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.472 21:13:40 thread -- scripts/common.sh@366 -- # decimal 2 00:06:22.472 21:13:40 thread -- scripts/common.sh@353 -- # local d=2 00:06:22.472 21:13:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.472 21:13:40 thread -- scripts/common.sh@355 -- # echo 2 00:06:22.472 21:13:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.472 21:13:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.472 21:13:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.472 21:13:40 thread -- scripts/common.sh@368 -- # return 0 00:06:22.472 21:13:40 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.472 21:13:40 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.472 --rc genhtml_branch_coverage=1 00:06:22.472 --rc genhtml_function_coverage=1 00:06:22.472 --rc genhtml_legend=1 00:06:22.472 --rc geninfo_all_blocks=1 00:06:22.472 --rc geninfo_unexecuted_blocks=1 00:06:22.472 00:06:22.472 ' 00:06:22.472 21:13:40 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.472 --rc genhtml_branch_coverage=1 00:06:22.472 --rc genhtml_function_coverage=1 00:06:22.472 --rc genhtml_legend=1 00:06:22.472 --rc geninfo_all_blocks=1 00:06:22.472 --rc geninfo_unexecuted_blocks=1 00:06:22.472 00:06:22.472 ' 00:06:22.472 21:13:40 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.472 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.472 --rc genhtml_branch_coverage=1 00:06:22.472 --rc genhtml_function_coverage=1 00:06:22.472 --rc genhtml_legend=1 00:06:22.472 --rc geninfo_all_blocks=1 00:06:22.472 --rc geninfo_unexecuted_blocks=1 00:06:22.472 00:06:22.472 ' 00:06:22.472 21:13:40 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.472 --rc genhtml_branch_coverage=1 00:06:22.472 --rc genhtml_function_coverage=1 00:06:22.472 --rc genhtml_legend=1 00:06:22.472 --rc geninfo_all_blocks=1 00:06:22.472 --rc geninfo_unexecuted_blocks=1 00:06:22.472 00:06:22.472 ' 00:06:22.472 21:13:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.472 21:13:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:22.472 21:13:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.472 21:13:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.472 ************************************ 00:06:22.472 START TEST thread_poller_perf 00:06:22.472 ************************************ 00:06:22.472 21:13:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.730 [2024-11-26 21:13:40.663535] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:22.730 [2024-11-26 21:13:40.663644] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59618 ] 00:06:22.730 [2024-11-26 21:13:40.836804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.989 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:22.989 [2024-11-26 21:13:40.948150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.365 [2024-11-26T21:13:42.521Z] ====================================== 00:06:24.365 [2024-11-26T21:13:42.521Z] busy:2299843310 (cyc) 00:06:24.365 [2024-11-26T21:13:42.521Z] total_run_count: 402000 00:06:24.365 [2024-11-26T21:13:42.521Z] tsc_hz: 2290000000 (cyc) 00:06:24.365 [2024-11-26T21:13:42.521Z] ====================================== 00:06:24.365 [2024-11-26T21:13:42.521Z] poller_cost: 5721 (cyc), 2498 (nsec) 00:06:24.365 00:06:24.365 real 0m1.561s 00:06:24.365 user 0m1.361s 00:06:24.365 sys 0m0.094s 00:06:24.365 21:13:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.365 21:13:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.365 ************************************ 00:06:24.365 END TEST thread_poller_perf 00:06:24.365 ************************************ 00:06:24.365 21:13:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.365 21:13:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:24.365 21:13:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.365 21:13:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.365 ************************************ 00:06:24.365 START TEST thread_poller_perf 00:06:24.365 
************************************ 00:06:24.365 21:13:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.365 [2024-11-26 21:13:42.291786] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:24.365 [2024-11-26 21:13:42.291889] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59654 ] 00:06:24.365 [2024-11-26 21:13:42.466230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.623 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:24.623 [2024-11-26 21:13:42.578111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.001 [2024-11-26T21:13:44.157Z] ====================================== 00:06:26.001 [2024-11-26T21:13:44.157Z] busy:2293483562 (cyc) 00:06:26.001 [2024-11-26T21:13:44.157Z] total_run_count: 5125000 00:06:26.001 [2024-11-26T21:13:44.157Z] tsc_hz: 2290000000 (cyc) 00:06:26.001 [2024-11-26T21:13:44.157Z] ====================================== 00:06:26.001 [2024-11-26T21:13:44.157Z] poller_cost: 447 (cyc), 195 (nsec) 00:06:26.001 00:06:26.001 real 0m1.557s 00:06:26.001 user 0m1.363s 00:06:26.001 sys 0m0.087s 00:06:26.001 21:13:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.001 21:13:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.001 ************************************ 00:06:26.001 END TEST thread_poller_perf 00:06:26.001 ************************************ 00:06:26.001 21:13:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.001 00:06:26.001 real 0m3.467s 00:06:26.001 user 0m2.901s 00:06:26.001 sys 0m0.372s 00:06:26.001 21:13:43 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.001 21:13:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.001 ************************************ 00:06:26.001 END TEST thread 00:06:26.001 ************************************ 00:06:26.001 21:13:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:26.001 21:13:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.001 21:13:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.001 21:13:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.001 21:13:43 -- common/autotest_common.sh@10 -- # set +x 00:06:26.001 ************************************ 00:06:26.001 START TEST app_cmdline 00:06:26.001 ************************************ 00:06:26.001 21:13:43 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.001 * Looking for test storage... 00:06:26.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.001 21:13:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.001 --rc genhtml_branch_coverage=1 00:06:26.001 --rc genhtml_function_coverage=1 00:06:26.001 --rc 
genhtml_legend=1 00:06:26.001 --rc geninfo_all_blocks=1 00:06:26.001 --rc geninfo_unexecuted_blocks=1 00:06:26.001 00:06:26.001 ' 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.001 --rc genhtml_branch_coverage=1 00:06:26.001 --rc genhtml_function_coverage=1 00:06:26.001 --rc genhtml_legend=1 00:06:26.001 --rc geninfo_all_blocks=1 00:06:26.001 --rc geninfo_unexecuted_blocks=1 00:06:26.001 00:06:26.001 ' 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.001 --rc genhtml_branch_coverage=1 00:06:26.001 --rc genhtml_function_coverage=1 00:06:26.001 --rc genhtml_legend=1 00:06:26.001 --rc geninfo_all_blocks=1 00:06:26.001 --rc geninfo_unexecuted_blocks=1 00:06:26.001 00:06:26.001 ' 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.001 --rc genhtml_branch_coverage=1 00:06:26.001 --rc genhtml_function_coverage=1 00:06:26.001 --rc genhtml_legend=1 00:06:26.001 --rc geninfo_all_blocks=1 00:06:26.001 --rc geninfo_unexecuted_blocks=1 00:06:26.001 00:06:26.001 ' 00:06:26.001 21:13:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.001 21:13:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59738 00:06:26.001 21:13:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.001 21:13:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59738 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59738 ']' 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:26.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.001 21:13:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.260 [2024-11-26 21:13:44.232740] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:26.260 [2024-11-26 21:13:44.232866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59738 ] 00:06:26.260 [2024-11-26 21:13:44.409134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.520 [2024-11-26 21:13:44.518490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:27.457 21:13:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:27.457 { 00:06:27.457 "version": "SPDK v25.01-pre git sha1 2f2acf4eb", 00:06:27.457 "fields": { 00:06:27.457 "major": 25, 00:06:27.457 "minor": 1, 00:06:27.457 "patch": 0, 00:06:27.457 "suffix": "-pre", 00:06:27.457 "commit": "2f2acf4eb" 00:06:27.457 } 00:06:27.457 } 00:06:27.457 21:13:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:27.457 21:13:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:27.457 21:13:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:27.457 21:13:45 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:27.457 21:13:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.457 21:13:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:27.457 21:13:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.457 21:13:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:27.457 21:13:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:27.457 21:13:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:27.457 21:13:45 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.716 request: 00:06:27.716 { 00:06:27.716 "method": "env_dpdk_get_mem_stats", 00:06:27.716 "req_id": 1 00:06:27.716 } 00:06:27.716 Got JSON-RPC error response 00:06:27.716 response: 00:06:27.716 { 00:06:27.716 "code": -32601, 00:06:27.716 "message": "Method not found" 00:06:27.716 } 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.716 21:13:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59738 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59738 ']' 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59738 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59738 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.716 killing process with pid 59738 00:06:27.716 21:13:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59738' 00:06:27.717 21:13:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 59738 00:06:27.717 21:13:45 app_cmdline -- common/autotest_common.sh@978 -- # wait 59738 00:06:30.253 00:06:30.253 real 0m4.252s 00:06:30.253 user 0m4.471s 00:06:30.253 sys 0m0.585s 00:06:30.253 21:13:48 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.253 21:13:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.253 ************************************ 00:06:30.253 END TEST app_cmdline 00:06:30.253 ************************************ 00:06:30.253 21:13:48 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:30.253 21:13:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.253 21:13:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.253 21:13:48 -- common/autotest_common.sh@10 -- # set +x 00:06:30.253 ************************************ 00:06:30.253 START TEST version 00:06:30.253 ************************************ 00:06:30.253 21:13:48 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:30.253 * Looking for test storage... 00:06:30.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:30.253 21:13:48 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.253 21:13:48 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.253 21:13:48 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.512 21:13:48 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.512 21:13:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.512 21:13:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.512 21:13:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.512 21:13:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.512 21:13:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.512 21:13:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.512 21:13:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.512 21:13:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.512 21:13:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.512 21:13:48 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:30.512 21:13:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.512 21:13:48 version -- scripts/common.sh@344 -- # case "$op" in 00:06:30.512 21:13:48 version -- scripts/common.sh@345 -- # : 1 00:06:30.512 21:13:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.512 21:13:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.512 21:13:48 version -- scripts/common.sh@365 -- # decimal 1 00:06:30.512 21:13:48 version -- scripts/common.sh@353 -- # local d=1 00:06:30.512 21:13:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.512 21:13:48 version -- scripts/common.sh@355 -- # echo 1 00:06:30.512 21:13:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.512 21:13:48 version -- scripts/common.sh@366 -- # decimal 2 00:06:30.512 21:13:48 version -- scripts/common.sh@353 -- # local d=2 00:06:30.512 21:13:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.512 21:13:48 version -- scripts/common.sh@355 -- # echo 2 00:06:30.512 21:13:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.512 21:13:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.512 21:13:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.513 21:13:48 version -- scripts/common.sh@368 -- # return 0 00:06:30.513 21:13:48 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.513 21:13:48 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.513 --rc genhtml_branch_coverage=1 00:06:30.513 --rc genhtml_function_coverage=1 00:06:30.513 --rc genhtml_legend=1 00:06:30.513 --rc geninfo_all_blocks=1 00:06:30.513 --rc geninfo_unexecuted_blocks=1 00:06:30.513 00:06:30.513 ' 00:06:30.513 21:13:48 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:06:30.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.513 --rc genhtml_branch_coverage=1 00:06:30.513 --rc genhtml_function_coverage=1 00:06:30.513 --rc genhtml_legend=1 00:06:30.513 --rc geninfo_all_blocks=1 00:06:30.513 --rc geninfo_unexecuted_blocks=1 00:06:30.513 00:06:30.513 ' 00:06:30.513 21:13:48 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.513 --rc genhtml_branch_coverage=1 00:06:30.513 --rc genhtml_function_coverage=1 00:06:30.513 --rc genhtml_legend=1 00:06:30.513 --rc geninfo_all_blocks=1 00:06:30.513 --rc geninfo_unexecuted_blocks=1 00:06:30.513 00:06:30.513 ' 00:06:30.513 21:13:48 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.513 --rc genhtml_branch_coverage=1 00:06:30.513 --rc genhtml_function_coverage=1 00:06:30.513 --rc genhtml_legend=1 00:06:30.513 --rc geninfo_all_blocks=1 00:06:30.513 --rc geninfo_unexecuted_blocks=1 00:06:30.513 00:06:30.513 ' 00:06:30.513 21:13:48 version -- app/version.sh@17 -- # get_header_version major 00:06:30.513 21:13:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.513 21:13:48 version -- app/version.sh@14 -- # cut -f2 00:06:30.513 21:13:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.513 21:13:48 version -- app/version.sh@17 -- # major=25 00:06:30.513 21:13:48 version -- app/version.sh@18 -- # get_header_version minor 00:06:30.513 21:13:48 version -- app/version.sh@14 -- # cut -f2 00:06:30.513 21:13:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.513 21:13:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.513 21:13:48 version -- app/version.sh@18 -- # minor=1 00:06:30.513 21:13:48 
version -- app/version.sh@19 -- # get_header_version patch 00:06:30.513 21:13:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.513 21:13:48 version -- app/version.sh@14 -- # cut -f2 00:06:30.513 21:13:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.513 21:13:48 version -- app/version.sh@19 -- # patch=0 00:06:30.513 21:13:48 version -- app/version.sh@20 -- # get_header_version suffix 00:06:30.513 21:13:48 version -- app/version.sh@14 -- # cut -f2 00:06:30.513 21:13:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.513 21:13:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.513 21:13:48 version -- app/version.sh@20 -- # suffix=-pre 00:06:30.513 21:13:48 version -- app/version.sh@22 -- # version=25.1 00:06:30.513 21:13:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:30.513 21:13:48 version -- app/version.sh@28 -- # version=25.1rc0 00:06:30.513 21:13:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:30.513 21:13:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:30.513 21:13:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:30.513 21:13:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:30.513 00:06:30.513 real 0m0.308s 00:06:30.513 user 0m0.177s 00:06:30.513 sys 0m0.183s 00:06:30.513 21:13:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.513 21:13:48 version -- common/autotest_common.sh@10 -- # set +x 00:06:30.513 ************************************ 00:06:30.513 END TEST version 00:06:30.513 ************************************ 00:06:30.513 
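The version test above assembles `25.1rc0` from the header fields it extracts (major=25, minor=1, patch=0, suffix=-pre) and checks it against `spdk.__version__`. A minimal sketch of that assembly, inferred from the traced steps in version.sh (skipping the patch component when it is 0, and mapping the `-pre` suffix to `rc0`):

```python
# Mirror the version-string assembly traced in test/app/version.sh above.
# Field values come from this log; the "-pre" -> "rc0" mapping is inferred
# from the trace, not taken from the script's source.

def spdk_version(major: int, minor: int, patch: int, suffix: str) -> str:
    version = f"{major}.{minor}"
    if patch != 0:
        version += f".{patch}"   # patch is appended only when nonzero
    if suffix == "-pre":
        version += "rc0"         # pre-release builds report an rc0 version
    return version

print(spdk_version(25, 1, 0, "-pre"))  # -> 25.1rc0
```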
21:13:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:30.513 21:13:48 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:30.513 21:13:48 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:30.513 21:13:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.513 21:13:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.513 21:13:48 -- common/autotest_common.sh@10 -- # set +x 00:06:30.513 ************************************ 00:06:30.513 START TEST bdev_raid 00:06:30.513 ************************************ 00:06:30.513 21:13:48 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:30.773 * Looking for test storage... 00:06:30.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.773 21:13:48 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.773 --rc genhtml_branch_coverage=1 00:06:30.773 --rc genhtml_function_coverage=1 00:06:30.773 --rc genhtml_legend=1 00:06:30.773 --rc geninfo_all_blocks=1 00:06:30.773 --rc geninfo_unexecuted_blocks=1 00:06:30.773 00:06:30.773 ' 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.773 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:30.773 --rc genhtml_branch_coverage=1 00:06:30.773 --rc genhtml_function_coverage=1 00:06:30.773 --rc genhtml_legend=1 00:06:30.773 --rc geninfo_all_blocks=1 00:06:30.773 --rc geninfo_unexecuted_blocks=1 00:06:30.773 00:06:30.773 ' 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.773 --rc genhtml_branch_coverage=1 00:06:30.773 --rc genhtml_function_coverage=1 00:06:30.773 --rc genhtml_legend=1 00:06:30.773 --rc geninfo_all_blocks=1 00:06:30.773 --rc geninfo_unexecuted_blocks=1 00:06:30.773 00:06:30.773 ' 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.773 --rc genhtml_branch_coverage=1 00:06:30.773 --rc genhtml_function_coverage=1 00:06:30.773 --rc genhtml_legend=1 00:06:30.773 --rc geninfo_all_blocks=1 00:06:30.773 --rc geninfo_unexecuted_blocks=1 00:06:30.773 00:06:30.773 ' 00:06:30.773 21:13:48 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:30.773 21:13:48 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:30.773 21:13:48 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:30.773 21:13:48 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:30.773 21:13:48 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:30.773 21:13:48 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:30.773 21:13:48 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.773 21:13:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:30.773 ************************************ 
00:06:30.773 START TEST raid1_resize_data_offset_test
00:06:30.773 ************************************
00:06:30.773 21:13:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test
00:06:30.773 Process raid pid: 59931
00:06:30.773 21:13:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59931
00:06:30.773 21:13:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:30.773 21:13:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59931'
00:06:30.773 21:13:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59931
00:06:30.773 21:13:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59931 ']'
00:06:30.773 21:13:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:30.773 21:13:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:30.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:30.773 21:13:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:30.773 21:13:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:30.773 21:13:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.773 [2024-11-26 21:13:48.922020] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:06:30.773 [2024-11-26 21:13:48.922152] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:31.041 [2024-11-26 21:13:49.095355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:31.309 [2024-11-26 21:13:49.207278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.309 [2024-11-26 21:13:49.407004] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:31.309 [2024-11-26 21:13:49.407045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.878 malloc0
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.878 malloc1
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.878 null0
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.878 [2024-11-26 21:13:49.928298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed
00:06:31.878 [2024-11-26 21:13:49.930008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:06:31.878 [2024-11-26 21:13:49.930063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed
00:06:31.878 [2024-11-26 21:13:49.930207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:31.878 [2024-11-26 21:13:49.930220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512
00:06:31.878 [2024-11-26 21:13:49.930468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:06:31.878 [2024-11-26 21:13:49.930645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:31.878 [2024-11-26 21:13:49.930664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:31.878 [2024-11-26 21:13:49.930814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.878 [2024-11-26 21:13:49.984278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.878 21:13:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:32.447 malloc2
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:32.447 [2024-11-26 21:13:50.523415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:06:32.447 [2024-11-26 21:13:50.539540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:32.447 [2024-11-26 21:13:50.541428] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59931
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59931 ']'
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59931
00:06:32.447 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:06:32.707 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:32.707 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59931
00:06:32.707 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:32.707 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59931
00:06:32.707 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59931'
00:06:32.707 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59931
00:06:32.707 21:13:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59931
00:06:32.707 [2024-11-26 21:13:50.637218] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:32.707 [2024-11-26 21:13:50.638908] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:32.707 [2024-11-26 21:13:50.638988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:32.707 [2024-11-26 21:13:50.639007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:32.707 [2024-11-26 21:13:50.674570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:32.707 [2024-11-26 21:13:50.674927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:32.707 [2024-11-26 21:13:50.674947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:34.618 [2024-11-26 21:13:52.403332] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:35.558 21:13:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:35.558
00:06:35.558 real 0m4.670s
00:06:35.558 user 0m4.602s
00:06:35.558 sys 0m0.509s
00:06:35.558 21:13:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:35.558 21:13:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.558 ************************************
00:06:35.558 END TEST raid1_resize_data_offset_test
00:06:35.558 ************************************
00:06:35.558 21:13:53 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:35.558 21:13:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:35.558 21:13:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:35.558 21:13:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:35.558 ************************************
00:06:35.558 START TEST raid0_resize_superblock_test
00:06:35.558 ************************************
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60009
00:06:35.558 Process raid pid: 60009
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60009'
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60009
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60009 ']'
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:35.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:35.558 21:13:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.558 [2024-11-26 21:13:53.660137] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
[2024-11-26 21:13:53.660277] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:35.818 [2024-11-26 21:13:53.836432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.818 [2024-11-26 21:13:53.947878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.077 [2024-11-26 21:13:54.137906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:36.077 [2024-11-26 21:13:54.137965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:36.646 21:13:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:36.646 21:13:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:36.646 21:13:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:36.646 21:13:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.646 21:13:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.906 malloc0
00:06:36.906 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.906 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:36.906 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.906 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.906 [2024-11-26 21:13:55.032586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-26 21:13:55.032690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-26 21:13:55.032715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-26 21:13:55.032726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-26 21:13:55.034751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-26 21:13:55.034790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.165 8cf87956-6b40-4e70-83d8-7cfc6d8086d1
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.165 229b61d2-44ab-4cb0-9f15-521b235091de
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.165 4865f375-42a4-481b-8679-f9546647e409
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.165 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.165 [2024-11-26 21:13:55.165186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 229b61d2-44ab-4cb0-9f15-521b235091de is claimed
[2024-11-26 21:13:55.165344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4865f375-42a4-481b-8679-f9546647e409 is claimed
[2024-11-26 21:13:55.165485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-26 21:13:55.165501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-11-26 21:13:55.165769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-26 21:13:55.166008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-26 21:13:55.166021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-26 21:13:55.166170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
[2024-11-26 21:13:55.277223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:37.166 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.426 [2024-11-26 21:13:55.325156] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-26 21:13:55.325186] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '229b61d2-44ab-4cb0-9f15-521b235091de' was resized: old size 131072, new size 204800
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.426 [2024-11-26 21:13:55.333042] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-26 21:13:55.333065] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4865f375-42a4-481b-8679-f9546647e409' was resized: old size 131072, new size 204800
[2024-11-26 21:13:55.333093] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.426 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.427 [2024-11-26 21:13:55.429060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.427 [2024-11-26 21:13:55.476724] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-11-26 21:13:55.476814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-11-26 21:13:55.476830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-26 21:13:55.476844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-11-26 21:13:55.476978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-26 21:13:55.477013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-26 21:13:55.477025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.427 [2024-11-26 21:13:55.484579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-26 21:13:55.484634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-26 21:13:55.484653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-11-26 21:13:55.484663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-26 21:13:55.486816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-26 21:13:55.486898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:37.427 [2024-11-26 21:13:55.488634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 229b61d2-44ab-4cb0-9f15-521b235091de
[2024-11-26 21:13:55.488716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 229b61d2-44ab-4cb0-9f15-521b235091de is claimed
[2024-11-26 21:13:55.488839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4865f375-42a4-481b-8679-f9546647e409
[2024-11-26 21:13:55.488857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4865f375-42a4-481b-8679-f9546647e409 is claimed
[2024-11-26 21:13:55.489039] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4865f375-42a4-481b-8679-f9546647e409 (2) smaller than existing raid bdev Raid (3)
[2024-11-26 21:13:55.489066] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 229b61d2-44ab-4cb0-9f15-521b235091de: File exists
[2024-11-26 21:13:55.489100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-26 21:13:55.489111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
pt0
[2024-11-26 21:13:55.489372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-26 21:13:55.489530] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-26 21:13:55.489538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-26 21:13:55.489686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
[2024-11-26 21:13:55.504884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60009
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60009 ']'
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60009
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:37.427 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60009
00:06:37.687 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:37.687 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 60009
00:06:37.687 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60009'
00:06:37.687 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60009
00:06:37.687 21:13:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60009
00:06:37.687 [2024-11-26 21:13:55.585302] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-26 21:13:55.585389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-26 21:13:55.585438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-26 21:13:55.585453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:39.068 [2024-11-26 21:13:56.953773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:40.008 21:13:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:40.008
00:06:40.008 real 0m4.456s
00:06:40.008 user 0m4.662s
00:06:40.008 sys 0m0.534s
00:06:40.008 21:13:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.008 21:13:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.008 ************************************
00:06:40.008 END TEST raid0_resize_superblock_test
00:06:40.008 ************************************
00:06:40.008 21:13:58 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:40.008 21:13:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:40.008 21:13:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:40.008 21:13:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:40.008 ************************************
00:06:40.008 START TEST raid1_resize_superblock_test
00:06:40.008 ************************************
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:40.008 Process raid pid: 60108
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60108
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60108'
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60108
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60108 ']'
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:40.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:40.008 21:13:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.269 [2024-11-26 21:13:58.181031] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
[2024-11-26 21:13:58.181138] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:40.269 [2024-11-26 21:13:58.349300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.529 [2024-11-26 21:13:58.459539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.529 [2024-11-26 21:13:58.649784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:40.529 [2024-11-26 21:13:58.649831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:41.099 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:41.099 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:41.099 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:41.099 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:41.099 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.668 malloc0
00:06:41.668 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:41.668 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:41.668 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:41.668 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.668 [2024-11-26 21:13:59.534011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-26 21:13:59.534086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-26 21:13:59.534109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-26 21:13:59.534123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-26 21:13:59.536318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-26 21:13:59.536443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.669 1aca7d19-1db0-4818-9f88-4cb59b9c1c9e
00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:41.669 21:13:59
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.669 807a4d36-eed9-4983-ab56-0fc7b42753fd 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.669 f8fa3794-5a90-40b4-bf35-202679f29936 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.669 [2024-11-26 21:13:59.666185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 807a4d36-eed9-4983-ab56-0fc7b42753fd is claimed 00:06:41.669 [2024-11-26 21:13:59.666317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f8fa3794-5a90-40b4-bf35-202679f29936 is claimed 00:06:41.669 [2024-11-26 21:13:59.666476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:41.669 [2024-11-26 21:13:59.666493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:41.669 [2024-11-26 21:13:59.666792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:41.669 [2024-11-26 21:13:59.667036] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:41.669 [2024-11-26 21:13:59.667052] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:41.669 [2024-11-26 21:13:59.667236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:41.669 [2024-11-26 21:13:59.778234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.669 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.930 [2024-11-26 21:13:59.826132] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:41.930 [2024-11-26 21:13:59.826164] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '807a4d36-eed9-4983-ab56-0fc7b42753fd' was resized: old size 131072, new size 204800 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:41.930 21:13:59 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.930 [2024-11-26 21:13:59.834100] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:41.930 [2024-11-26 21:13:59.834129] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f8fa3794-5a90-40b4-bf35-202679f29936' was resized: old size 131072, new size 204800 00:06:41.930 [2024-11-26 21:13:59.834162] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:41.930 [2024-11-26 21:13:59.949941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.930 21:13:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.930 [2024-11-26 21:13:59.997639] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:41.931 [2024-11-26 21:13:59.997784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:41.931 [2024-11-26 21:13:59.997833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:41.931 [2024-11-26 21:13:59.998031] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:41.931 [2024-11-26 21:13:59.998294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.931 [2024-11-26 21:13:59.998409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:41.931 [2024-11-26 21:13:59.998461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.931 [2024-11-26 21:14:00.005498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:41.931 [2024-11-26 21:14:00.005594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.931 [2024-11-26 21:14:00.005629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:41.931 [2024-11-26 21:14:00.005660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.931 [2024-11-26 21:14:00.007815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.931 [2024-11-26 21:14:00.007895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:41.931 [2024-11-26 21:14:00.009568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
807a4d36-eed9-4983-ab56-0fc7b42753fd 00:06:41.931 [2024-11-26 21:14:00.009701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 807a4d36-eed9-4983-ab56-0fc7b42753fd is claimed 00:06:41.931 [2024-11-26 21:14:00.009871] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f8fa3794-5a90-40b4-bf35-202679f29936 00:06:41.931 [2024-11-26 21:14:00.009931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f8fa3794-5a90-40b4-bf35-202679f29936 is claimed 00:06:41.931 [2024-11-26 21:14:00.010136] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f8fa3794-5a90-40b4-bf35-202679f29936 (2) smaller than existing raid bdev Raid (3) 00:06:41.931 [2024-11-26 21:14:00.010203] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 807a4d36-eed9-4983-ab56-0fc7b42753fd: File exists 00:06:41.931 [2024-11-26 21:14:00.010276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:41.931 [2024-11-26 21:14:00.010312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:41.931 pt0 00:06:41.931 [2024-11-26 21:14:00.010576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:41.931 [2024-11-26 21:14:00.010764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:41.931 [2024-11-26 21:14:00.010803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.931 [2024-11-26 21:14:00.011001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.931 [2024-11-26 21:14:00.025874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60108 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60108 ']' 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60108 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.931 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60108 00:06:42.192 killing process with pid 60108 00:06:42.192 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.192 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.192 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60108' 00:06:42.192 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60108 00:06:42.192 21:14:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60108 00:06:42.192 [2024-11-26 21:14:00.108699] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:42.192 [2024-11-26 21:14:00.108804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:42.192 [2024-11-26 21:14:00.108875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:42.192 [2024-11-26 21:14:00.108889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:43.575 [2024-11-26 21:14:01.460681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:44.514 ************************************ 00:06:44.514 END TEST raid1_resize_superblock_test 00:06:44.514 ************************************ 00:06:44.514 21:14:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:44.514 00:06:44.514 real 0m4.437s 00:06:44.514 user 0m4.653s 00:06:44.514 sys 0m0.543s 00:06:44.514 21:14:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.514 21:14:02 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.514 21:14:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:44.514 21:14:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:44.514 21:14:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:44.514 21:14:02 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:44.514 21:14:02 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:44.514 21:14:02 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:44.514 21:14:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.514 21:14:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.514 21:14:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:44.514 ************************************ 00:06:44.514 START TEST raid_function_test_raid0 00:06:44.514 ************************************ 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:44.514 Process raid pid: 60210 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60210 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60210' 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60210 00:06:44.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60210 ']' 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.514 21:14:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:44.774 [2024-11-26 21:14:02.708775] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:44.774 [2024-11-26 21:14:02.709008] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.774 [2024-11-26 21:14:02.884883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.033 [2024-11-26 21:14:02.996848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.293 [2024-11-26 21:14:03.192451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.293 [2024-11-26 21:14:03.192588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.562 Base_1 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.562 Base_2 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:45.562 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.563 [2024-11-26 21:14:03.631261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:45.563 [2024-11-26 21:14:03.633042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:45.563 [2024-11-26 21:14:03.633192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:45.563 [2024-11-26 21:14:03.633208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:45.563 [2024-11-26 21:14:03.633485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:45.563 [2024-11-26 21:14:03.633642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:45.563 [2024-11-26 21:14:03.633651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:45.563 [2024-11-26 21:14:03.633828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:45.563 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:45.834 [2024-11-26 21:14:03.850944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:45.834 /dev/nbd0 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.834 
21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.834 1+0 records in 00:06:45.834 1+0 records out 00:06:45.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307857 s, 13.3 MB/s 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:45.834 21:14:03 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.097 { 00:06:46.097 "nbd_device": "/dev/nbd0", 00:06:46.097 "bdev_name": "raid" 00:06:46.097 } 00:06:46.097 ]' 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.097 { 00:06:46.097 "nbd_device": "/dev/nbd0", 00:06:46.097 "bdev_name": "raid" 00:06:46.097 } 00:06:46.097 ]' 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:46.097 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:46.098 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:46.098 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:46.098 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:46.098 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:46.098 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:46.098 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:46.098 4096+0 records in 00:06:46.098 4096+0 records out 00:06:46.098 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0346758 s, 60.5 MB/s 00:06:46.098 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:46.360 4096+0 records in 00:06:46.360 4096+0 records out 00:06:46.360 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.196358 s, 10.7 MB/s 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:46.360 128+0 records in 00:06:46.360 128+0 records out 00:06:46.360 65536 bytes (66 kB, 64 KiB) copied, 0.00120731 s, 54.3 MB/s 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:46.360 2035+0 records in 00:06:46.360 2035+0 records out 00:06:46.360 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0142179 s, 73.3 MB/s 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:46.360 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:46.619 21:14:04 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:46.619 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:46.620 456+0 records in 00:06:46.620 456+0 records out 00:06:46.620 233472 bytes (233 kB, 228 KiB) copied, 0.0038003 s, 61.4 MB/s 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.620 21:14:04 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.620 [2024-11-26 21:14:04.763309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.620 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:46.879 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:46.879 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:46.879 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.879 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.879 21:14:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60210 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60210 ']' 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60210 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60210 00:06:47.139 killing process with pid 60210 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60210' 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60210 
00:06:47.139 [2024-11-26 21:14:05.083804] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:47.139 21:14:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60210 00:06:47.139 [2024-11-26 21:14:05.083924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.139 [2024-11-26 21:14:05.083986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:47.139 [2024-11-26 21:14:05.084001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:47.139 [2024-11-26 21:14:05.282985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:48.521 ************************************ 00:06:48.521 END TEST raid_function_test_raid0 00:06:48.521 ************************************ 00:06:48.521 21:14:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:48.521 00:06:48.521 real 0m3.735s 00:06:48.521 user 0m4.365s 00:06:48.521 sys 0m0.891s 00:06:48.521 21:14:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.521 21:14:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:48.521 21:14:06 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:48.521 21:14:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.521 21:14:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.521 21:14:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:48.521 ************************************ 00:06:48.521 START TEST raid_function_test_concat 00:06:48.521 ************************************ 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60334 00:06:48.521 Process raid pid: 60334 00:06:48.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60334' 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60334 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60334 ']' 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:48.521 21:14:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:48.521 [2024-11-26 21:14:06.537934] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:48.521 [2024-11-26 21:14:06.539492] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.781 [2024-11-26 21:14:06.733673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.781 [2024-11-26 21:14:06.847730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.040 [2024-11-26 21:14:07.044436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.040 [2024-11-26 21:14:07.044483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.300 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.300 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:49.300 21:14:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:49.300 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.300 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.300 Base_1 00:06:49.300 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.300 21:14:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:49.300 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.300 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.560 Base_2 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.560 [2024-11-26 21:14:07.465488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:49.560 [2024-11-26 21:14:07.467315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:49.560 [2024-11-26 21:14:07.467401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:49.560 [2024-11-26 21:14:07.467413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:49.560 [2024-11-26 21:14:07.467704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:49.560 [2024-11-26 21:14:07.467884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:49.560 [2024-11-26 21:14:07.467894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:49.560 [2024-11-26 21:14:07.468068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.560 21:14:07 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.560 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.561 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:49.561 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:49.561 [2024-11-26 21:14:07.705128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:49.820 /dev/nbd0 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.820 1+0 records in 00:06:49.820 1+0 records out 00:06:49.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366564 s, 11.2 MB/s 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:06:49.820 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:50.078 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.078 { 00:06:50.078 "nbd_device": "/dev/nbd0", 00:06:50.078 "bdev_name": "raid" 00:06:50.078 } 00:06:50.078 ]' 00:06:50.078 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.078 { 00:06:50.078 "nbd_device": "/dev/nbd0", 00:06:50.078 "bdev_name": "raid" 00:06:50.078 } 00:06:50.078 ]' 00:06:50.078 21:14:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:50.078 4096+0 records in 00:06:50.078 4096+0 records out 00:06:50.078 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0356769 s, 58.8 MB/s 00:06:50.078 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:50.337 4096+0 records in 00:06:50.337 4096+0 records out 00:06:50.337 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.201175 s, 10.4 MB/s 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:50.337 128+0 records in 00:06:50.337 128+0 records out 00:06:50.337 65536 bytes (66 kB, 64 KiB) copied, 0.00109004 s, 60.1 MB/s 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:50.337 2035+0 records in 00:06:50.337 2035+0 records out 00:06:50.337 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0138258 s, 75.4 MB/s 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:50.337 21:14:08 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:50.337 456+0 records in 00:06:50.337 456+0 records out 00:06:50.337 233472 bytes (233 kB, 228 KiB) copied, 0.00381034 s, 61.3 MB/s 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:50.337 
21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.337 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:50.597 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.597 [2024-11-26 21:14:08.613044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.597 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.597 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.597 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.597 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.597 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.597 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:50.597 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.597 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:50.597 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.597 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.856 21:14:08 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60334 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60334 ']' 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60334 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60334 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.856 killing process with pid 60334 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60334' 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60334 00:06:50.856 [2024-11-26 21:14:08.929455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:50.856 21:14:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60334 00:06:50.856 [2024-11-26 21:14:08.929561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.856 [2024-11-26 21:14:08.929612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:50.856 [2024-11-26 21:14:08.929623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:51.115 [2024-11-26 21:14:09.128698] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:52.051 21:14:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:52.051 00:06:52.051 real 0m3.774s 00:06:52.051 user 0m4.370s 00:06:52.051 sys 0m0.970s 00:06:52.051 21:14:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.051 ************************************ 00:06:52.051 END TEST raid_function_test_concat 00:06:52.051 ************************************ 00:06:52.051 21:14:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:52.360 21:14:10 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:52.360 21:14:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.360 21:14:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.360 21:14:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:52.360 ************************************ 00:06:52.360 START TEST raid0_resize_test 00:06:52.360 ************************************ 00:06:52.360 21:14:10 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60462 00:06:52.360 Process raid pid: 60462 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60462' 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60462 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60462 ']' 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.360 21:14:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:52.360 [2024-11-26 21:14:10.354607] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:52.360 [2024-11-26 21:14:10.354729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.619 [2024-11-26 21:14:10.531094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.619 [2024-11-26 21:14:10.642051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.879 [2024-11-26 21:14:10.837078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.879 [2024-11-26 21:14:10.837125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.139 Base_1 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:53.139 21:14:11 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.139 Base_2 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.139 [2024-11-26 21:14:11.209980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:53.139 [2024-11-26 21:14:11.211846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:53.139 [2024-11-26 21:14:11.211938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:53.139 [2024-11-26 21:14:11.211951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:53.139 [2024-11-26 21:14:11.212256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:53.139 [2024-11-26 21:14:11.212399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:53.139 [2024-11-26 21:14:11.212418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:53.139 [2024-11-26 21:14:11.212590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.139 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:53.140 21:14:11 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.140 [2024-11-26 21:14:11.217921] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:53.140 [2024-11-26 21:14:11.217955] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:53.140 true 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.140 [2024-11-26 21:14:11.234106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:53.140 [2024-11-26 21:14:11.277828] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:53.140 [2024-11-26 21:14:11.277867] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:53.140 [2024-11-26 21:14:11.277896] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:53.140 true 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.140 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.400 [2024-11-26 21:14:11.293983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60462 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60462 ']' 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60462 
00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60462 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.400 killing process with pid 60462 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60462' 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60462 00:06:53.400 [2024-11-26 21:14:11.353451] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.400 21:14:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60462 00:06:53.400 [2024-11-26 21:14:11.353551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.400 [2024-11-26 21:14:11.353613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:53.400 [2024-11-26 21:14:11.353623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:53.400 [2024-11-26 21:14:11.370827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.339 21:14:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:54.339 00:06:54.339 real 0m2.176s 00:06:54.339 user 0m2.289s 00:06:54.339 sys 0m0.340s 00:06:54.339 21:14:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.339 21:14:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.339 ************************************ 00:06:54.339 END TEST 
raid0_resize_test 00:06:54.339 ************************************ 00:06:54.339 21:14:12 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:54.339 21:14:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.339 21:14:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.339 21:14:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.599 ************************************ 00:06:54.599 START TEST raid1_resize_test 00:06:54.599 ************************************ 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60518 00:06:54.599 Process raid pid: 60518 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60518' 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60518 00:06:54.599 21:14:12 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60518 ']' 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.599 21:14:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.599 [2024-11-26 21:14:12.590999] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:54.599 [2024-11-26 21:14:12.591116] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.599 [2024-11-26 21:14:12.751934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.859 [2024-11-26 21:14:12.864765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.119 [2024-11-26 21:14:13.054598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.119 [2024-11-26 21:14:13.054657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:55.379 21:14:13 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.379 Base_1 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.379 Base_2 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.379 [2024-11-26 21:14:13.446583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:55.379 [2024-11-26 21:14:13.448319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:55.379 [2024-11-26 21:14:13.448399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:55.379 [2024-11-26 21:14:13.448411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:55.379 [2024-11-26 21:14:13.448652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:55.379 [2024-11-26 21:14:13.448775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:55.379 [2024-11-26 21:14:13.448788] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:55.379 [2024-11-26 21:14:13.448923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.379 [2024-11-26 21:14:13.458547] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.379 [2024-11-26 21:14:13.458580] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:55.379 true 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.379 [2024-11-26 21:14:13.474686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:55.379 21:14:13 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:55.379 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.380 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.380 [2024-11-26 21:14:13.518482] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.380 [2024-11-26 21:14:13.518522] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:55.380 [2024-11-26 21:14:13.518549] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:55.380 true 00:06:55.380 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.380 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:55.380 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.380 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.380 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.640 [2024-11-26 21:14:13.534612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:55.640 21:14:13 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60518 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60518 ']' 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60518 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60518 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.640 killing process with pid 60518 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60518' 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60518 00:06:55.640 [2024-11-26 21:14:13.583642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.640 [2024-11-26 21:14:13.583740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.640 21:14:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60518 00:06:55.640 [2024-11-26 21:14:13.584253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.640 [2024-11-26 21:14:13.584281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:55.640 [2024-11-26 21:14:13.600879] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:06:56.579 21:14:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:56.579 00:06:56.579 real 0m2.156s 00:06:56.579 user 0m2.284s 00:06:56.579 sys 0m0.317s 00:06:56.579 21:14:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.579 21:14:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.579 ************************************ 00:06:56.579 END TEST raid1_resize_test 00:06:56.579 ************************************ 00:06:56.579 21:14:14 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:56.579 21:14:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:56.579 21:14:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:56.579 21:14:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:56.579 21:14:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.579 21:14:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.579 ************************************ 00:06:56.579 START TEST raid_state_function_test 00:06:56.579 ************************************ 00:06:56.579 21:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:56.579 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:56.579 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:56.579 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:56.579 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:56.579 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:56.579 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:06:56.579 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:56.580 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:56.580 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:56.580 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:56.580 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:56.580 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:56.839 Process raid pid: 60575 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@229 -- # raid_pid=60575 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60575' 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60575 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60575 ']' 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.839 21:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.839 [2024-11-26 21:14:14.821798] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:56.839 [2024-11-26 21:14:14.821909] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.098 [2024-11-26 21:14:14.993896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.098 [2024-11-26 21:14:15.104626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.358 [2024-11-26 21:14:15.304658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.358 [2024-11-26 21:14:15.304694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.618 [2024-11-26 21:14:15.649769] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:57.618 [2024-11-26 21:14:15.649815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:57.618 [2024-11-26 21:14:15.649826] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:57.618 [2024-11-26 21:14:15.649836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.618 21:14:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.618 "name": "Existed_Raid", 00:06:57.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.618 "strip_size_kb": 64, 00:06:57.618 "state": "configuring", 00:06:57.618 
"raid_level": "raid0", 00:06:57.618 "superblock": false, 00:06:57.618 "num_base_bdevs": 2, 00:06:57.618 "num_base_bdevs_discovered": 0, 00:06:57.618 "num_base_bdevs_operational": 2, 00:06:57.618 "base_bdevs_list": [ 00:06:57.618 { 00:06:57.618 "name": "BaseBdev1", 00:06:57.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.618 "is_configured": false, 00:06:57.618 "data_offset": 0, 00:06:57.618 "data_size": 0 00:06:57.618 }, 00:06:57.618 { 00:06:57.618 "name": "BaseBdev2", 00:06:57.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.618 "is_configured": false, 00:06:57.618 "data_offset": 0, 00:06:57.618 "data_size": 0 00:06:57.618 } 00:06:57.618 ] 00:06:57.618 }' 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.618 21:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.189 [2024-11-26 21:14:16.104932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:58.189 [2024-11-26 21:14:16.104979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:58.189 [2024-11-26 21:14:16.116892] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:58.189 [2024-11-26 21:14:16.116929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:58.189 [2024-11-26 21:14:16.116938] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:58.189 [2024-11-26 21:14:16.116949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.189 [2024-11-26 21:14:16.160867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:58.189 BaseBdev1 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:58.189 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.190 [ 00:06:58.190 { 00:06:58.190 "name": "BaseBdev1", 00:06:58.190 "aliases": [ 00:06:58.190 "a4424755-921c-4df6-8fad-84ab24a69aa5" 00:06:58.190 ], 00:06:58.190 "product_name": "Malloc disk", 00:06:58.190 "block_size": 512, 00:06:58.190 "num_blocks": 65536, 00:06:58.190 "uuid": "a4424755-921c-4df6-8fad-84ab24a69aa5", 00:06:58.190 "assigned_rate_limits": { 00:06:58.190 "rw_ios_per_sec": 0, 00:06:58.190 "rw_mbytes_per_sec": 0, 00:06:58.190 "r_mbytes_per_sec": 0, 00:06:58.190 "w_mbytes_per_sec": 0 00:06:58.190 }, 00:06:58.190 "claimed": true, 00:06:58.190 "claim_type": "exclusive_write", 00:06:58.190 "zoned": false, 00:06:58.190 "supported_io_types": { 00:06:58.190 "read": true, 00:06:58.190 "write": true, 00:06:58.190 "unmap": true, 00:06:58.190 "flush": true, 00:06:58.190 "reset": true, 00:06:58.190 "nvme_admin": false, 00:06:58.190 "nvme_io": false, 00:06:58.190 "nvme_io_md": false, 00:06:58.190 "write_zeroes": true, 00:06:58.190 "zcopy": true, 00:06:58.190 "get_zone_info": false, 00:06:58.190 "zone_management": false, 00:06:58.190 "zone_append": false, 00:06:58.190 "compare": false, 00:06:58.190 "compare_and_write": false, 00:06:58.190 "abort": true, 00:06:58.190 "seek_hole": false, 00:06:58.190 "seek_data": false, 00:06:58.190 "copy": true, 00:06:58.190 "nvme_iov_md": 
false 00:06:58.190 }, 00:06:58.190 "memory_domains": [ 00:06:58.190 { 00:06:58.190 "dma_device_id": "system", 00:06:58.190 "dma_device_type": 1 00:06:58.190 }, 00:06:58.190 { 00:06:58.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.190 "dma_device_type": 2 00:06:58.190 } 00:06:58.190 ], 00:06:58.190 "driver_specific": {} 00:06:58.190 } 00:06:58.190 ] 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.190 
21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.190 "name": "Existed_Raid", 00:06:58.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.190 "strip_size_kb": 64, 00:06:58.190 "state": "configuring", 00:06:58.190 "raid_level": "raid0", 00:06:58.190 "superblock": false, 00:06:58.190 "num_base_bdevs": 2, 00:06:58.190 "num_base_bdevs_discovered": 1, 00:06:58.190 "num_base_bdevs_operational": 2, 00:06:58.190 "base_bdevs_list": [ 00:06:58.190 { 00:06:58.190 "name": "BaseBdev1", 00:06:58.190 "uuid": "a4424755-921c-4df6-8fad-84ab24a69aa5", 00:06:58.190 "is_configured": true, 00:06:58.190 "data_offset": 0, 00:06:58.190 "data_size": 65536 00:06:58.190 }, 00:06:58.190 { 00:06:58.190 "name": "BaseBdev2", 00:06:58.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.190 "is_configured": false, 00:06:58.190 "data_offset": 0, 00:06:58.190 "data_size": 0 00:06:58.190 } 00:06:58.190 ] 00:06:58.190 }' 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.190 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.761 [2024-11-26 21:14:16.664070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:58.761 [2024-11-26 21:14:16.664122] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.761 [2024-11-26 21:14:16.676090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:58.761 [2024-11-26 21:14:16.677950] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:58.761 [2024-11-26 21:14:16.678037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.761 "name": "Existed_Raid", 00:06:58.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.761 "strip_size_kb": 64, 00:06:58.761 "state": "configuring", 00:06:58.761 "raid_level": "raid0", 00:06:58.761 "superblock": false, 00:06:58.761 "num_base_bdevs": 2, 00:06:58.761 "num_base_bdevs_discovered": 1, 00:06:58.761 "num_base_bdevs_operational": 2, 00:06:58.761 "base_bdevs_list": [ 00:06:58.761 { 00:06:58.761 "name": "BaseBdev1", 00:06:58.761 "uuid": "a4424755-921c-4df6-8fad-84ab24a69aa5", 00:06:58.761 "is_configured": true, 00:06:58.761 "data_offset": 0, 00:06:58.761 "data_size": 65536 00:06:58.761 }, 00:06:58.761 { 00:06:58.761 "name": "BaseBdev2", 00:06:58.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.761 "is_configured": false, 00:06:58.761 "data_offset": 0, 00:06:58.761 "data_size": 0 00:06:58.761 } 00:06:58.761 
] 00:06:58.761 }' 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.761 21:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.022 [2024-11-26 21:14:17.121752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:59.022 [2024-11-26 21:14:17.121865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:59.022 [2024-11-26 21:14:17.121892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:59.022 [2024-11-26 21:14:17.122227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:59.022 [2024-11-26 21:14:17.122458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:59.022 [2024-11-26 21:14:17.122503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:59.022 [2024-11-26 21:14:17.122789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.022 BaseBdev2 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:59.022 21:14:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.022 [ 00:06:59.022 { 00:06:59.022 "name": "BaseBdev2", 00:06:59.022 "aliases": [ 00:06:59.022 "4312c071-4ec9-4209-82ca-b8d9e65626b3" 00:06:59.022 ], 00:06:59.022 "product_name": "Malloc disk", 00:06:59.022 "block_size": 512, 00:06:59.022 "num_blocks": 65536, 00:06:59.022 "uuid": "4312c071-4ec9-4209-82ca-b8d9e65626b3", 00:06:59.022 "assigned_rate_limits": { 00:06:59.022 "rw_ios_per_sec": 0, 00:06:59.022 "rw_mbytes_per_sec": 0, 00:06:59.022 "r_mbytes_per_sec": 0, 00:06:59.022 "w_mbytes_per_sec": 0 00:06:59.022 }, 00:06:59.022 "claimed": true, 00:06:59.022 "claim_type": "exclusive_write", 00:06:59.022 "zoned": false, 00:06:59.022 "supported_io_types": { 00:06:59.022 "read": true, 00:06:59.022 "write": true, 00:06:59.022 "unmap": true, 00:06:59.022 "flush": true, 00:06:59.022 "reset": true, 00:06:59.022 "nvme_admin": false, 00:06:59.022 "nvme_io": false, 00:06:59.022 "nvme_io_md": 
false, 00:06:59.022 "write_zeroes": true, 00:06:59.022 "zcopy": true, 00:06:59.022 "get_zone_info": false, 00:06:59.022 "zone_management": false, 00:06:59.022 "zone_append": false, 00:06:59.022 "compare": false, 00:06:59.022 "compare_and_write": false, 00:06:59.022 "abort": true, 00:06:59.022 "seek_hole": false, 00:06:59.022 "seek_data": false, 00:06:59.022 "copy": true, 00:06:59.022 "nvme_iov_md": false 00:06:59.022 }, 00:06:59.022 "memory_domains": [ 00:06:59.022 { 00:06:59.022 "dma_device_id": "system", 00:06:59.022 "dma_device_type": 1 00:06:59.022 }, 00:06:59.022 { 00:06:59.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.022 "dma_device_type": 2 00:06:59.022 } 00:06:59.022 ], 00:06:59.022 "driver_specific": {} 00:06:59.022 } 00:06:59.022 ] 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.022 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.023 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:59.023 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.023 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.023 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.023 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.023 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.023 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.023 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.023 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.331 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.331 "name": "Existed_Raid", 00:06:59.331 "uuid": "7f2184a9-9b1d-493d-a814-cc868a400fe9", 00:06:59.331 "strip_size_kb": 64, 00:06:59.331 "state": "online", 00:06:59.331 "raid_level": "raid0", 00:06:59.331 "superblock": false, 00:06:59.331 "num_base_bdevs": 2, 00:06:59.331 "num_base_bdevs_discovered": 2, 00:06:59.331 "num_base_bdevs_operational": 2, 00:06:59.331 "base_bdevs_list": [ 00:06:59.331 { 00:06:59.331 "name": "BaseBdev1", 00:06:59.331 "uuid": "a4424755-921c-4df6-8fad-84ab24a69aa5", 00:06:59.331 "is_configured": true, 00:06:59.331 "data_offset": 0, 00:06:59.331 "data_size": 65536 00:06:59.331 }, 00:06:59.331 { 00:06:59.331 "name": "BaseBdev2", 00:06:59.331 "uuid": "4312c071-4ec9-4209-82ca-b8d9e65626b3", 00:06:59.331 "is_configured": true, 00:06:59.331 "data_offset": 0, 00:06:59.331 "data_size": 65536 00:06:59.331 } 00:06:59.331 ] 00:06:59.331 }' 00:06:59.331 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:59.331 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.608 [2024-11-26 21:14:17.589388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:59.608 "name": "Existed_Raid", 00:06:59.608 "aliases": [ 00:06:59.608 "7f2184a9-9b1d-493d-a814-cc868a400fe9" 00:06:59.608 ], 00:06:59.608 "product_name": "Raid Volume", 00:06:59.608 "block_size": 512, 00:06:59.608 "num_blocks": 131072, 00:06:59.608 "uuid": "7f2184a9-9b1d-493d-a814-cc868a400fe9", 00:06:59.608 "assigned_rate_limits": { 00:06:59.608 "rw_ios_per_sec": 0, 00:06:59.608 "rw_mbytes_per_sec": 0, 00:06:59.608 "r_mbytes_per_sec": 
0, 00:06:59.608 "w_mbytes_per_sec": 0 00:06:59.608 }, 00:06:59.608 "claimed": false, 00:06:59.608 "zoned": false, 00:06:59.608 "supported_io_types": { 00:06:59.608 "read": true, 00:06:59.608 "write": true, 00:06:59.608 "unmap": true, 00:06:59.608 "flush": true, 00:06:59.608 "reset": true, 00:06:59.608 "nvme_admin": false, 00:06:59.608 "nvme_io": false, 00:06:59.608 "nvme_io_md": false, 00:06:59.608 "write_zeroes": true, 00:06:59.608 "zcopy": false, 00:06:59.608 "get_zone_info": false, 00:06:59.608 "zone_management": false, 00:06:59.608 "zone_append": false, 00:06:59.608 "compare": false, 00:06:59.608 "compare_and_write": false, 00:06:59.608 "abort": false, 00:06:59.608 "seek_hole": false, 00:06:59.608 "seek_data": false, 00:06:59.608 "copy": false, 00:06:59.608 "nvme_iov_md": false 00:06:59.608 }, 00:06:59.608 "memory_domains": [ 00:06:59.608 { 00:06:59.608 "dma_device_id": "system", 00:06:59.608 "dma_device_type": 1 00:06:59.608 }, 00:06:59.608 { 00:06:59.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.608 "dma_device_type": 2 00:06:59.608 }, 00:06:59.608 { 00:06:59.608 "dma_device_id": "system", 00:06:59.608 "dma_device_type": 1 00:06:59.608 }, 00:06:59.608 { 00:06:59.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.608 "dma_device_type": 2 00:06:59.608 } 00:06:59.608 ], 00:06:59.608 "driver_specific": { 00:06:59.608 "raid": { 00:06:59.608 "uuid": "7f2184a9-9b1d-493d-a814-cc868a400fe9", 00:06:59.608 "strip_size_kb": 64, 00:06:59.608 "state": "online", 00:06:59.608 "raid_level": "raid0", 00:06:59.608 "superblock": false, 00:06:59.608 "num_base_bdevs": 2, 00:06:59.608 "num_base_bdevs_discovered": 2, 00:06:59.608 "num_base_bdevs_operational": 2, 00:06:59.608 "base_bdevs_list": [ 00:06:59.608 { 00:06:59.608 "name": "BaseBdev1", 00:06:59.608 "uuid": "a4424755-921c-4df6-8fad-84ab24a69aa5", 00:06:59.608 "is_configured": true, 00:06:59.608 "data_offset": 0, 00:06:59.608 "data_size": 65536 00:06:59.608 }, 00:06:59.608 { 00:06:59.608 "name": "BaseBdev2", 
00:06:59.608 "uuid": "4312c071-4ec9-4209-82ca-b8d9e65626b3", 00:06:59.608 "is_configured": true, 00:06:59.608 "data_offset": 0, 00:06:59.608 "data_size": 65536 00:06:59.608 } 00:06:59.608 ] 00:06:59.608 } 00:06:59.608 } 00:06:59.608 }' 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:59.608 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:59.608 BaseBdev2' 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.609 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.868 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:59.868 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:59.868 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:59.868 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.868 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.868 [2024-11-26 21:14:17.788746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:59.868 [2024-11-26 21:14:17.788783] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:59.868 [2024-11-26 21:14:17.788841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.868 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.868 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.869 "name": "Existed_Raid", 00:06:59.869 "uuid": "7f2184a9-9b1d-493d-a814-cc868a400fe9", 00:06:59.869 "strip_size_kb": 64, 00:06:59.869 
"state": "offline", 00:06:59.869 "raid_level": "raid0", 00:06:59.869 "superblock": false, 00:06:59.869 "num_base_bdevs": 2, 00:06:59.869 "num_base_bdevs_discovered": 1, 00:06:59.869 "num_base_bdevs_operational": 1, 00:06:59.869 "base_bdevs_list": [ 00:06:59.869 { 00:06:59.869 "name": null, 00:06:59.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.869 "is_configured": false, 00:06:59.869 "data_offset": 0, 00:06:59.869 "data_size": 65536 00:06:59.869 }, 00:06:59.869 { 00:06:59.869 "name": "BaseBdev2", 00:06:59.869 "uuid": "4312c071-4ec9-4209-82ca-b8d9e65626b3", 00:06:59.869 "is_configured": true, 00:06:59.869 "data_offset": 0, 00:06:59.869 "data_size": 65536 00:06:59.869 } 00:06:59.869 ] 00:06:59.869 }' 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.869 21:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.439 [2024-11-26 21:14:18.345588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:00.439 [2024-11-26 21:14:18.345641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60575 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60575 ']' 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60575 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60575 00:07:00.439 killing process with pid 60575 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60575' 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60575 00:07:00.439 21:14:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60575 00:07:00.439 [2024-11-26 21:14:18.528386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:00.439 [2024-11-26 21:14:18.544793] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:01.819 ************************************ 00:07:01.819 END TEST raid_state_function_test 00:07:01.819 ************************************ 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:01.819 00:07:01.819 real 0m4.934s 00:07:01.819 user 0m7.093s 00:07:01.819 sys 0m0.798s 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.819 21:14:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:01.819 21:14:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:01.819 21:14:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.819 21:14:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:01.819 ************************************ 00:07:01.819 START TEST raid_state_function_test_sb 00:07:01.819 ************************************ 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60828 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60828' 00:07:01.819 Process raid pid: 60828 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60828 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60828 ']' 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.819 21:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.819 [2024-11-26 21:14:19.819733] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:01.819 [2024-11-26 21:14:19.819933] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.078 [2024-11-26 21:14:19.991555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.078 [2024-11-26 21:14:20.100714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.338 [2024-11-26 21:14:20.305647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.338 [2024-11-26 21:14:20.305764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.598 [2024-11-26 21:14:20.650568] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:02.598 [2024-11-26 21:14:20.650619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:02.598 [2024-11-26 21:14:20.650630] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.598 [2024-11-26 21:14:20.650640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.598 
21:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.598 "name": "Existed_Raid", 00:07:02.598 "uuid": "77b48255-cd6b-4593-b96e-78b3826c3828", 00:07:02.598 "strip_size_kb": 64, 00:07:02.598 "state": "configuring", 00:07:02.598 "raid_level": "raid0", 00:07:02.598 "superblock": true, 00:07:02.598 "num_base_bdevs": 2, 00:07:02.598 "num_base_bdevs_discovered": 0, 00:07:02.598 "num_base_bdevs_operational": 2, 00:07:02.598 "base_bdevs_list": [ 00:07:02.598 { 00:07:02.598 "name": "BaseBdev1", 00:07:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.598 "is_configured": false, 00:07:02.598 "data_offset": 0, 00:07:02.598 "data_size": 0 00:07:02.598 }, 00:07:02.598 { 00:07:02.598 "name": "BaseBdev2", 00:07:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.598 "is_configured": false, 00:07:02.598 "data_offset": 0, 00:07:02.598 "data_size": 0 00:07:02.598 } 00:07:02.598 ] 00:07:02.598 }' 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.598 21:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.168 [2024-11-26 21:14:21.061820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:03.168 [2024-11-26 21:14:21.061857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.168 [2024-11-26 21:14:21.069800] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:03.168 [2024-11-26 21:14:21.069882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:03.168 [2024-11-26 21:14:21.069911] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.168 [2024-11-26 21:14:21.069937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.168 [2024-11-26 21:14:21.113778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:03.168 BaseBdev1 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.168 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.169 [ 00:07:03.169 { 00:07:03.169 "name": "BaseBdev1", 00:07:03.169 "aliases": [ 00:07:03.169 "2629327e-10dc-4c65-9ba7-12572d580c20" 00:07:03.169 ], 00:07:03.169 "product_name": "Malloc disk", 00:07:03.169 "block_size": 512, 00:07:03.169 "num_blocks": 65536, 00:07:03.169 "uuid": "2629327e-10dc-4c65-9ba7-12572d580c20", 00:07:03.169 "assigned_rate_limits": { 00:07:03.169 "rw_ios_per_sec": 0, 00:07:03.169 "rw_mbytes_per_sec": 0, 00:07:03.169 "r_mbytes_per_sec": 0, 00:07:03.169 "w_mbytes_per_sec": 0 00:07:03.169 }, 00:07:03.169 "claimed": true, 
00:07:03.169 "claim_type": "exclusive_write", 00:07:03.169 "zoned": false, 00:07:03.169 "supported_io_types": { 00:07:03.169 "read": true, 00:07:03.169 "write": true, 00:07:03.169 "unmap": true, 00:07:03.169 "flush": true, 00:07:03.169 "reset": true, 00:07:03.169 "nvme_admin": false, 00:07:03.169 "nvme_io": false, 00:07:03.169 "nvme_io_md": false, 00:07:03.169 "write_zeroes": true, 00:07:03.169 "zcopy": true, 00:07:03.169 "get_zone_info": false, 00:07:03.169 "zone_management": false, 00:07:03.169 "zone_append": false, 00:07:03.169 "compare": false, 00:07:03.169 "compare_and_write": false, 00:07:03.169 "abort": true, 00:07:03.169 "seek_hole": false, 00:07:03.169 "seek_data": false, 00:07:03.169 "copy": true, 00:07:03.169 "nvme_iov_md": false 00:07:03.169 }, 00:07:03.169 "memory_domains": [ 00:07:03.169 { 00:07:03.169 "dma_device_id": "system", 00:07:03.169 "dma_device_type": 1 00:07:03.169 }, 00:07:03.169 { 00:07:03.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.169 "dma_device_type": 2 00:07:03.169 } 00:07:03.169 ], 00:07:03.169 "driver_specific": {} 00:07:03.169 } 00:07:03.169 ] 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.169 21:14:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.169 "name": "Existed_Raid", 00:07:03.169 "uuid": "1ecdc4bd-628b-4157-a99d-97406bdf59f0", 00:07:03.169 "strip_size_kb": 64, 00:07:03.169 "state": "configuring", 00:07:03.169 "raid_level": "raid0", 00:07:03.169 "superblock": true, 00:07:03.169 "num_base_bdevs": 2, 00:07:03.169 "num_base_bdevs_discovered": 1, 00:07:03.169 "num_base_bdevs_operational": 2, 00:07:03.169 "base_bdevs_list": [ 00:07:03.169 { 00:07:03.169 "name": "BaseBdev1", 00:07:03.169 "uuid": "2629327e-10dc-4c65-9ba7-12572d580c20", 00:07:03.169 "is_configured": true, 00:07:03.169 "data_offset": 2048, 00:07:03.169 "data_size": 63488 00:07:03.169 }, 00:07:03.169 { 00:07:03.169 "name": "BaseBdev2", 00:07:03.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.169 
"is_configured": false, 00:07:03.169 "data_offset": 0, 00:07:03.169 "data_size": 0 00:07:03.169 } 00:07:03.169 ] 00:07:03.169 }' 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.169 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.429 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:03.429 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.429 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.429 [2024-11-26 21:14:21.581076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:03.429 [2024-11-26 21:14:21.581210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.689 [2024-11-26 21:14:21.593086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:03.689 [2024-11-26 21:14:21.594920] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.689 [2024-11-26 21:14:21.595021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.689 21:14:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.689 21:14:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.689 "name": "Existed_Raid", 00:07:03.689 "uuid": "babd067f-82a9-4cfa-8853-29bb93a9a437", 00:07:03.689 "strip_size_kb": 64, 00:07:03.689 "state": "configuring", 00:07:03.689 "raid_level": "raid0", 00:07:03.689 "superblock": true, 00:07:03.689 "num_base_bdevs": 2, 00:07:03.689 "num_base_bdevs_discovered": 1, 00:07:03.689 "num_base_bdevs_operational": 2, 00:07:03.689 "base_bdevs_list": [ 00:07:03.689 { 00:07:03.689 "name": "BaseBdev1", 00:07:03.689 "uuid": "2629327e-10dc-4c65-9ba7-12572d580c20", 00:07:03.689 "is_configured": true, 00:07:03.689 "data_offset": 2048, 00:07:03.689 "data_size": 63488 00:07:03.689 }, 00:07:03.689 { 00:07:03.689 "name": "BaseBdev2", 00:07:03.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.689 "is_configured": false, 00:07:03.689 "data_offset": 0, 00:07:03.689 "data_size": 0 00:07:03.689 } 00:07:03.689 ] 00:07:03.689 }' 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.689 21:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.950 [2024-11-26 21:14:22.053494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:03.950 [2024-11-26 21:14:22.053751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:03.950 [2024-11-26 21:14:22.053767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:03.950 [2024-11-26 21:14:22.054051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:03.950 BaseBdev2 00:07:03.950 [2024-11-26 21:14:22.054210] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:03.950 [2024-11-26 21:14:22.054224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:03.950 [2024-11-26 21:14:22.054371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.950 21:14:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.950 [ 00:07:03.950 { 00:07:03.950 "name": "BaseBdev2", 00:07:03.950 "aliases": [ 00:07:03.950 "5b36399c-c4e4-44fe-8a90-a5a24c8f0a1d" 00:07:03.950 ], 00:07:03.950 "product_name": "Malloc disk", 00:07:03.950 "block_size": 512, 00:07:03.950 "num_blocks": 65536, 00:07:03.950 "uuid": "5b36399c-c4e4-44fe-8a90-a5a24c8f0a1d", 00:07:03.950 "assigned_rate_limits": { 00:07:03.950 "rw_ios_per_sec": 0, 00:07:03.950 "rw_mbytes_per_sec": 0, 00:07:03.950 "r_mbytes_per_sec": 0, 00:07:03.950 "w_mbytes_per_sec": 0 00:07:03.950 }, 00:07:03.950 "claimed": true, 00:07:03.950 "claim_type": "exclusive_write", 00:07:03.950 "zoned": false, 00:07:03.950 "supported_io_types": { 00:07:03.950 "read": true, 00:07:03.950 "write": true, 00:07:03.950 "unmap": true, 00:07:03.950 "flush": true, 00:07:03.950 "reset": true, 00:07:03.950 "nvme_admin": false, 00:07:03.950 "nvme_io": false, 00:07:03.950 "nvme_io_md": false, 00:07:03.950 "write_zeroes": true, 00:07:03.950 "zcopy": true, 00:07:03.950 "get_zone_info": false, 00:07:03.950 "zone_management": false, 00:07:03.950 "zone_append": false, 00:07:03.950 "compare": false, 00:07:03.950 "compare_and_write": false, 00:07:03.950 "abort": true, 00:07:03.950 "seek_hole": false, 00:07:03.950 "seek_data": false, 00:07:03.950 "copy": true, 00:07:03.950 "nvme_iov_md": false 00:07:03.950 }, 00:07:03.950 "memory_domains": [ 00:07:03.950 { 00:07:03.950 "dma_device_id": "system", 00:07:03.950 "dma_device_type": 1 00:07:03.950 }, 00:07:03.950 { 00:07:03.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.950 "dma_device_type": 2 00:07:03.950 } 00:07:03.950 ], 00:07:03.950 "driver_specific": {} 00:07:03.950 } 00:07:03.950 ] 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:03.950 21:14:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.950 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.210 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.210 21:14:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.210 "name": "Existed_Raid", 00:07:04.210 "uuid": "babd067f-82a9-4cfa-8853-29bb93a9a437", 00:07:04.210 "strip_size_kb": 64, 00:07:04.210 "state": "online", 00:07:04.210 "raid_level": "raid0", 00:07:04.210 "superblock": true, 00:07:04.210 "num_base_bdevs": 2, 00:07:04.210 "num_base_bdevs_discovered": 2, 00:07:04.210 "num_base_bdevs_operational": 2, 00:07:04.210 "base_bdevs_list": [ 00:07:04.210 { 00:07:04.210 "name": "BaseBdev1", 00:07:04.210 "uuid": "2629327e-10dc-4c65-9ba7-12572d580c20", 00:07:04.210 "is_configured": true, 00:07:04.210 "data_offset": 2048, 00:07:04.210 "data_size": 63488 00:07:04.210 }, 00:07:04.210 { 00:07:04.210 "name": "BaseBdev2", 00:07:04.210 "uuid": "5b36399c-c4e4-44fe-8a90-a5a24c8f0a1d", 00:07:04.210 "is_configured": true, 00:07:04.210 "data_offset": 2048, 00:07:04.210 "data_size": 63488 00:07:04.210 } 00:07:04.210 ] 00:07:04.210 }' 00:07:04.210 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.210 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.470 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:04.470 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:04.470 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:04.470 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:04.470 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:04.470 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:04.470 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:04.470 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:04.470 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.470 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.470 [2024-11-26 21:14:22.596981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.470 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.730 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:04.730 "name": "Existed_Raid", 00:07:04.730 "aliases": [ 00:07:04.730 "babd067f-82a9-4cfa-8853-29bb93a9a437" 00:07:04.730 ], 00:07:04.730 "product_name": "Raid Volume", 00:07:04.730 "block_size": 512, 00:07:04.730 "num_blocks": 126976, 00:07:04.730 "uuid": "babd067f-82a9-4cfa-8853-29bb93a9a437", 00:07:04.730 "assigned_rate_limits": { 00:07:04.730 "rw_ios_per_sec": 0, 00:07:04.730 "rw_mbytes_per_sec": 0, 00:07:04.730 "r_mbytes_per_sec": 0, 00:07:04.730 "w_mbytes_per_sec": 0 00:07:04.730 }, 00:07:04.730 "claimed": false, 00:07:04.730 "zoned": false, 00:07:04.730 "supported_io_types": { 00:07:04.730 "read": true, 00:07:04.730 "write": true, 00:07:04.730 "unmap": true, 00:07:04.730 "flush": true, 00:07:04.730 "reset": true, 00:07:04.730 "nvme_admin": false, 00:07:04.730 "nvme_io": false, 00:07:04.730 "nvme_io_md": false, 00:07:04.730 "write_zeroes": true, 00:07:04.730 "zcopy": false, 00:07:04.730 "get_zone_info": false, 00:07:04.730 "zone_management": false, 00:07:04.730 "zone_append": false, 00:07:04.730 "compare": false, 00:07:04.730 "compare_and_write": false, 00:07:04.730 "abort": false, 00:07:04.730 "seek_hole": false, 00:07:04.730 "seek_data": false, 00:07:04.730 "copy": false, 00:07:04.730 "nvme_iov_md": false 00:07:04.730 }, 00:07:04.730 "memory_domains": [ 00:07:04.730 { 00:07:04.731 
"dma_device_id": "system", 00:07:04.731 "dma_device_type": 1 00:07:04.731 }, 00:07:04.731 { 00:07:04.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.731 "dma_device_type": 2 00:07:04.731 }, 00:07:04.731 { 00:07:04.731 "dma_device_id": "system", 00:07:04.731 "dma_device_type": 1 00:07:04.731 }, 00:07:04.731 { 00:07:04.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.731 "dma_device_type": 2 00:07:04.731 } 00:07:04.731 ], 00:07:04.731 "driver_specific": { 00:07:04.731 "raid": { 00:07:04.731 "uuid": "babd067f-82a9-4cfa-8853-29bb93a9a437", 00:07:04.731 "strip_size_kb": 64, 00:07:04.731 "state": "online", 00:07:04.731 "raid_level": "raid0", 00:07:04.731 "superblock": true, 00:07:04.731 "num_base_bdevs": 2, 00:07:04.731 "num_base_bdevs_discovered": 2, 00:07:04.731 "num_base_bdevs_operational": 2, 00:07:04.731 "base_bdevs_list": [ 00:07:04.731 { 00:07:04.731 "name": "BaseBdev1", 00:07:04.731 "uuid": "2629327e-10dc-4c65-9ba7-12572d580c20", 00:07:04.731 "is_configured": true, 00:07:04.731 "data_offset": 2048, 00:07:04.731 "data_size": 63488 00:07:04.731 }, 00:07:04.731 { 00:07:04.731 "name": "BaseBdev2", 00:07:04.731 "uuid": "5b36399c-c4e4-44fe-8a90-a5a24c8f0a1d", 00:07:04.731 "is_configured": true, 00:07:04.731 "data_offset": 2048, 00:07:04.731 "data_size": 63488 00:07:04.731 } 00:07:04.731 ] 00:07:04.731 } 00:07:04.731 } 00:07:04.731 }' 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:04.731 BaseBdev2' 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:04.731 21:14:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.731 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.731 [2024-11-26 21:14:22.836295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:04.731 [2024-11-26 21:14:22.836378] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:04.731 [2024-11-26 21:14:22.836471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.992 "name": "Existed_Raid", 00:07:04.992 "uuid": "babd067f-82a9-4cfa-8853-29bb93a9a437", 00:07:04.992 "strip_size_kb": 64, 00:07:04.992 "state": "offline", 00:07:04.992 "raid_level": "raid0", 00:07:04.992 "superblock": true, 00:07:04.992 "num_base_bdevs": 2, 00:07:04.992 "num_base_bdevs_discovered": 1, 00:07:04.992 "num_base_bdevs_operational": 1, 00:07:04.992 "base_bdevs_list": [ 00:07:04.992 { 00:07:04.992 "name": null, 00:07:04.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.992 "is_configured": false, 00:07:04.992 "data_offset": 0, 00:07:04.992 "data_size": 63488 00:07:04.992 }, 00:07:04.992 { 00:07:04.992 "name": "BaseBdev2", 00:07:04.992 "uuid": "5b36399c-c4e4-44fe-8a90-a5a24c8f0a1d", 00:07:04.992 "is_configured": true, 00:07:04.992 "data_offset": 2048, 00:07:04.992 "data_size": 63488 00:07:04.992 } 00:07:04.992 ] 
00:07:04.992 }' 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.992 21:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.252 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.252 [2024-11-26 21:14:23.378043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:05.252 [2024-11-26 21:14:23.378099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.513 21:14:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60828 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60828 ']' 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60828 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60828 00:07:05.513 killing process with pid 60828 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60828' 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60828 00:07:05.513 [2024-11-26 21:14:23.551749] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.513 21:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60828 00:07:05.513 [2024-11-26 21:14:23.568320] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.506 21:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:06.506 00:07:06.506 real 0m4.919s 00:07:06.506 user 0m7.153s 00:07:06.506 sys 0m0.737s 00:07:06.506 21:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.506 21:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.506 ************************************ 00:07:06.506 END TEST raid_state_function_test_sb 00:07:06.506 ************************************ 00:07:06.766 21:14:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:06.766 21:14:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:06.766 21:14:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.766 21:14:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.766 ************************************ 00:07:06.766 START TEST raid_superblock_test 00:07:06.766 ************************************ 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61075 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61075 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61075 ']' 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:06.766 
21:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.766 21:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.766 [2024-11-26 21:14:24.799311] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:06.766 [2024-11-26 21:14:24.799421] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61075 ] 00:07:07.025 [2024-11-26 21:14:24.970376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.025 [2024-11-26 21:14:25.079950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.283 [2024-11-26 21:14:25.274480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.283 [2024-11-26 21:14:25.274543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.543 malloc1 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.543 [2024-11-26 21:14:25.669630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:07.543 [2024-11-26 21:14:25.669698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.543 [2024-11-26 21:14:25.669723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:07.543 [2024-11-26 21:14:25.669734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:07.543 [2024-11-26 21:14:25.672201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.543 [2024-11-26 21:14:25.672240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:07.543 pt1 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.543 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.805 malloc2 00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.805 [2024-11-26 21:14:25.721669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:07.805 [2024-11-26 21:14:25.721726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.805 [2024-11-26 21:14:25.721750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:07.805 [2024-11-26 21:14:25.721759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.805 [2024-11-26 21:14:25.723851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.805 [2024-11-26 21:14:25.723955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:07.805 pt2 00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.805 [2024-11-26 21:14:25.733693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:07.805 [2024-11-26 21:14:25.735485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:07.805 [2024-11-26 21:14:25.735668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:07.805 [2024-11-26 21:14:25.735681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:07.805 [2024-11-26 21:14:25.735912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:07.805 [2024-11-26 21:14:25.736103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:07.805 [2024-11-26 21:14:25.736115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:07:07.805 [2024-11-26 21:14:25.736274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.805 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:07.805 "name": "raid_bdev1",
00:07:07.805 "uuid": "0bad079a-742a-46f8-8e2b-3ec017bc441d",
00:07:07.805 "strip_size_kb": 64,
00:07:07.805 "state": "online",
00:07:07.805 "raid_level": "raid0",
00:07:07.805 "superblock": true,
00:07:07.805 "num_base_bdevs": 2,
00:07:07.805 "num_base_bdevs_discovered": 2,
00:07:07.805 "num_base_bdevs_operational": 2,
00:07:07.805 "base_bdevs_list": [
00:07:07.805 {
00:07:07.805 "name": "pt1",
00:07:07.805 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:07.805 "is_configured": true,
00:07:07.805 "data_offset": 2048,
00:07:07.805 "data_size": 63488
00:07:07.805 },
00:07:07.805 {
00:07:07.805 "name": "pt2",
00:07:07.805 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:07.805 "is_configured": true,
00:07:07.805 "data_offset": 2048,
00:07:07.805 "data_size": 63488
00:07:07.805 }
00:07:07.805 ]
00:07:07.805 }'
00:07:07.806 21:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:07.806 21:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.065 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:07:08.065 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:08.065 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:08.065 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:08.065 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:08.065 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:08.065 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:08.065 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:08.065 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.065 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.065 [2024-11-26 21:14:26.185166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:08.065 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:08.324 "name": "raid_bdev1",
00:07:08.324 "aliases": [
00:07:08.324 "0bad079a-742a-46f8-8e2b-3ec017bc441d"
00:07:08.324 ],
00:07:08.324 "product_name": "Raid Volume",
00:07:08.324 "block_size": 512,
00:07:08.324 "num_blocks": 126976,
00:07:08.324 "uuid": "0bad079a-742a-46f8-8e2b-3ec017bc441d",
00:07:08.324 "assigned_rate_limits": {
00:07:08.324 "rw_ios_per_sec": 0,
00:07:08.324 "rw_mbytes_per_sec": 0,
00:07:08.324 "r_mbytes_per_sec": 0,
00:07:08.324 "w_mbytes_per_sec": 0
00:07:08.324 },
00:07:08.324 "claimed": false,
00:07:08.324 "zoned": false,
00:07:08.324 "supported_io_types": {
00:07:08.324 "read": true,
00:07:08.324 "write": true,
00:07:08.324 "unmap": true,
00:07:08.324 "flush": true,
00:07:08.324 "reset": true,
00:07:08.324 "nvme_admin": false,
00:07:08.324 "nvme_io": false,
00:07:08.324 "nvme_io_md": false,
00:07:08.324 "write_zeroes": true,
00:07:08.324 "zcopy": false,
00:07:08.324 "get_zone_info": false,
00:07:08.324 "zone_management": false,
00:07:08.324 "zone_append": false,
00:07:08.324 "compare": false,
00:07:08.324 "compare_and_write": false,
00:07:08.324 "abort": false,
00:07:08.324 "seek_hole": false,
00:07:08.324 "seek_data": false,
00:07:08.324 "copy": false,
00:07:08.324 "nvme_iov_md": false
00:07:08.324 },
00:07:08.324 "memory_domains": [
00:07:08.324 {
00:07:08.324 "dma_device_id": "system",
00:07:08.324 "dma_device_type": 1
00:07:08.324 },
00:07:08.324 {
00:07:08.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:08.324 "dma_device_type": 2
00:07:08.324 },
00:07:08.324 {
00:07:08.324 "dma_device_id": "system",
00:07:08.324 "dma_device_type": 1
00:07:08.324 },
00:07:08.324 {
00:07:08.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:08.324 "dma_device_type": 2
00:07:08.324 }
00:07:08.324 ],
00:07:08.324 "driver_specific": {
00:07:08.324 "raid": {
00:07:08.324 "uuid": "0bad079a-742a-46f8-8e2b-3ec017bc441d",
00:07:08.324 "strip_size_kb": 64,
00:07:08.324 "state": "online",
00:07:08.324 "raid_level": "raid0",
00:07:08.324 "superblock": true,
00:07:08.324 "num_base_bdevs": 2,
00:07:08.324 "num_base_bdevs_discovered": 2,
00:07:08.324 "num_base_bdevs_operational": 2,
00:07:08.324 "base_bdevs_list": [
00:07:08.324 {
00:07:08.324 "name": "pt1",
00:07:08.324 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:08.324 "is_configured": true,
00:07:08.324 "data_offset": 2048,
00:07:08.324 "data_size": 63488
00:07:08.324 },
00:07:08.324 {
00:07:08.324 "name": "pt2",
00:07:08.324 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:08.324 "is_configured": true,
00:07:08.324 "data_offset": 2048,
00:07:08.324 "data_size": 63488
00:07:08.324 }
00:07:08.324 ]
00:07:08.324 }
00:07:08.324 }
00:07:08.324 }'
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:08.324 pt2'
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.324 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.324 [2024-11-26 21:14:26.424716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:08.325 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.325 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0bad079a-742a-46f8-8e2b-3ec017bc441d
00:07:08.325 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0bad079a-742a-46f8-8e2b-3ec017bc441d ']'
00:07:08.325 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:08.325 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.325 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.325 [2024-11-26 21:14:26.468376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:08.325 [2024-11-26 21:14:26.468400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:08.325 [2024-11-26 21:14:26.468481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:08.325 [2024-11-26 21:14:26.468526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:08.325 [2024-11-26 21:14:26.468537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:07:08.325 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.325 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:08.325 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.325 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.325 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.585 [2024-11-26 21:14:26.588226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:07:08.585 [2024-11-26 21:14:26.590077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:08.585 [2024-11-26 21:14:26.590211] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:07:08.585 [2024-11-26 21:14:26.590274] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:07:08.585 [2024-11-26 21:14:26.590293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:08.585 [2024-11-26 21:14:26.590307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:07:08.585 request:
00:07:08.585 {
00:07:08.585 "name": "raid_bdev1",
00:07:08.585 "raid_level": "raid0",
00:07:08.585 "base_bdevs": [
00:07:08.585 "malloc1",
00:07:08.585 "malloc2"
00:07:08.585 ],
00:07:08.585 "strip_size_kb": 64,
00:07:08.585 "superblock": false,
00:07:08.585 "method": "bdev_raid_create",
00:07:08.585 "req_id": 1
00:07:08.585 }
00:07:08.585 Got JSON-RPC error response
00:07:08.585 response:
00:07:08.585 {
00:07:08.585 "code": -17,
00:07:08.585 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:07:08.585 }
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.585 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.585 [2024-11-26 21:14:26.640078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:08.585 [2024-11-26 21:14:26.640171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:08.585 [2024-11-26 21:14:26.640204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:07:08.585 [2024-11-26 21:14:26.640234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:08.585 [2024-11-26 21:14:26.642408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:08.585 [2024-11-26 21:14:26.642475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:08.585 [2024-11-26 21:14:26.642594] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:07:08.585 [2024-11-26 21:14:26.642674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:08.586 pt1
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:08.586 "name": "raid_bdev1",
00:07:08.586 "uuid": "0bad079a-742a-46f8-8e2b-3ec017bc441d",
00:07:08.586 "strip_size_kb": 64,
00:07:08.586 "state": "configuring",
00:07:08.586 "raid_level": "raid0",
00:07:08.586 "superblock": true,
00:07:08.586 "num_base_bdevs": 2,
00:07:08.586 "num_base_bdevs_discovered": 1,
00:07:08.586 "num_base_bdevs_operational": 2,
00:07:08.586 "base_bdevs_list": [
00:07:08.586 {
00:07:08.586 "name": "pt1",
00:07:08.586 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:08.586 "is_configured": true,
00:07:08.586 "data_offset": 2048,
00:07:08.586 "data_size": 63488
00:07:08.586 },
00:07:08.586 {
00:07:08.586 "name": null,
00:07:08.586 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:08.586 "is_configured": false,
00:07:08.586 "data_offset": 2048,
00:07:08.586 "data_size": 63488
00:07:08.586 }
00:07:08.586 ]
00:07:08.586 }'
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:08.586 21:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:09.154 [2024-11-26 21:14:27.099384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:09.154 [2024-11-26 21:14:27.099523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:09.154 [2024-11-26 21:14:27.099565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:07:09.154 [2024-11-26 21:14:27.099596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:09.154 [2024-11-26 21:14:27.100099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:09.154 [2024-11-26 21:14:27.100168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:09.154 [2024-11-26 21:14:27.100278] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:07:09.154 [2024-11-26 21:14:27.100336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:09.154 [2024-11-26 21:14:27.100477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:09.154 [2024-11-26 21:14:27.100518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:09.154 [2024-11-26 21:14:27.100777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:07:09.154 [2024-11-26 21:14:27.100976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:09.154 [2024-11-26 21:14:27.101016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:09.154 [2024-11-26 21:14:27.101223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:09.154 pt2
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:09.154 "name": "raid_bdev1",
00:07:09.154 "uuid": "0bad079a-742a-46f8-8e2b-3ec017bc441d",
00:07:09.154 "strip_size_kb": 64,
00:07:09.154 "state": "online",
00:07:09.154 "raid_level": "raid0",
00:07:09.154 "superblock": true,
00:07:09.154 "num_base_bdevs": 2,
00:07:09.154 "num_base_bdevs_discovered": 2,
00:07:09.154 "num_base_bdevs_operational": 2,
00:07:09.154 "base_bdevs_list": [
00:07:09.154 {
00:07:09.154 "name": "pt1",
00:07:09.154 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:09.154 "is_configured": true,
00:07:09.154 "data_offset": 2048,
00:07:09.154 "data_size": 63488
00:07:09.154 },
00:07:09.154 {
00:07:09.154 "name": "pt2",
00:07:09.154 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:09.154 "is_configured": true,
00:07:09.154 "data_offset": 2048,
00:07:09.154 "data_size": 63488
00:07:09.154 }
00:07:09.154 ]
00:07:09.154 }'
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:09.154 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:09.413 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:07:09.413 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:09.413 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:09.413 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:09.413 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:09.413 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:09.413 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:09.413 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.413 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:09.413 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:09.413 [2024-11-26 21:14:27.546916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:09.413 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.671 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:09.671 "name": "raid_bdev1",
00:07:09.671 "aliases": [
00:07:09.671 "0bad079a-742a-46f8-8e2b-3ec017bc441d"
00:07:09.671 ],
00:07:09.671 "product_name": "Raid Volume",
00:07:09.671 "block_size": 512,
00:07:09.671 "num_blocks": 126976,
00:07:09.671 "uuid": "0bad079a-742a-46f8-8e2b-3ec017bc441d",
00:07:09.671 "assigned_rate_limits": {
00:07:09.671 "rw_ios_per_sec": 0,
00:07:09.671 "rw_mbytes_per_sec": 0,
00:07:09.671 "r_mbytes_per_sec": 0,
00:07:09.671 "w_mbytes_per_sec": 0
00:07:09.671 },
00:07:09.671 "claimed": false,
00:07:09.671 "zoned": false,
00:07:09.671 "supported_io_types": {
00:07:09.671 "read": true,
00:07:09.671 "write": true,
00:07:09.671 "unmap": true,
00:07:09.671 "flush": true,
00:07:09.671 "reset": true,
00:07:09.671 "nvme_admin": false,
00:07:09.671 "nvme_io": false,
00:07:09.671 "nvme_io_md": false,
00:07:09.671 "write_zeroes": true,
00:07:09.671 "zcopy": false,
00:07:09.671 "get_zone_info": false,
00:07:09.671 "zone_management": false,
00:07:09.671 "zone_append": false,
00:07:09.671 "compare": false,
00:07:09.671 "compare_and_write": false,
00:07:09.671 "abort": false,
00:07:09.671 "seek_hole": false,
00:07:09.671 "seek_data": false,
00:07:09.671 "copy": false,
00:07:09.671 "nvme_iov_md": false
00:07:09.672 },
00:07:09.672 "memory_domains": [
00:07:09.672 {
00:07:09.672 "dma_device_id": "system",
00:07:09.672 "dma_device_type": 1
00:07:09.672 },
00:07:09.672 {
00:07:09.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:09.672 "dma_device_type": 2
00:07:09.672 },
00:07:09.672 {
00:07:09.672 "dma_device_id": "system",
00:07:09.672 "dma_device_type": 1
00:07:09.672 },
00:07:09.672 {
00:07:09.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:09.672 "dma_device_type": 2
00:07:09.672 }
00:07:09.672 ],
00:07:09.672 "driver_specific": {
00:07:09.672 "raid": {
00:07:09.672 "uuid": "0bad079a-742a-46f8-8e2b-3ec017bc441d",
00:07:09.672 "strip_size_kb": 64,
00:07:09.672 "state": "online",
00:07:09.672 "raid_level": "raid0",
00:07:09.672 "superblock": true,
00:07:09.672 "num_base_bdevs": 2,
00:07:09.672 "num_base_bdevs_discovered": 2,
00:07:09.672 "num_base_bdevs_operational": 2,
00:07:09.672 "base_bdevs_list": [
00:07:09.672 {
00:07:09.672 "name": "pt1",
00:07:09.672 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:09.672 "is_configured": true,
00:07:09.672 "data_offset": 2048,
00:07:09.672 "data_size": 63488
00:07:09.672 },
00:07:09.672 {
00:07:09.672 "name": "pt2",
00:07:09.672 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:09.672 "is_configured": true,
00:07:09.672 "data_offset": 2048,
00:07:09.672 "data_size": 63488
00:07:09.672 }
00:07:09.672 ]
00:07:09.672 }
00:07:09.672 }
00:07:09.672 }'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:09.672 pt2'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:09.672 [2024-11-26 21:14:27.746518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0bad079a-742a-46f8-8e2b-3ec017bc441d '!=' 0bad079a-742a-46f8-8e2b-3ec017bc441d ']'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61075
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61075 ']'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61075
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61075
killing process with pid 61075
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61075'
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61075
00:07:09.672 [2024-11-26 21:14:27.814380] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:09.672 [2024-11-26 21:14:27.814477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:09.672 [2024-11-26 21:14:27.814525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:09.672 [2024-11-26 21:14:27.814537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:09.672 21:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61075
00:07:09.930 [2024-11-26 21:14:28.011825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:11.305 ************************************
00:07:11.305 21:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:07:11.305
00:07:11.305 real 0m4.383s
00:07:11.305 user 0m6.194s
00:07:11.305 sys 0m0.683s
00:07:11.305 21:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:11.305 21:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:11.305 END TEST raid_superblock_test
00:07:11.305 ************************************
00:07:11.305 21:14:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read
00:07:11.305 21:14:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:11.305 21:14:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:11.305 21:14:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:11.305 ************************************
00:07:11.305 START TEST raid_read_error_test
00:07:11.305 ************************************
00:07:11.305 21:14:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read
00:07:11.305 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:07:11.305 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:07:11.305 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:07:11.305 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:11.305 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:11.305 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:11.305 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:11.305 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:11.305 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:11.306 21:14:29 bdev_raid.raid_read_error_test --
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tB3i3FY3AY 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61285 00:07:11.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61285 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61285 ']' 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.306 21:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:11.306 [2024-11-26 21:14:29.256781] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:11.306 [2024-11-26 21:14:29.256914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61285 ] 00:07:11.306 [2024-11-26 21:14:29.428350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.564 [2024-11-26 21:14:29.538790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.822 [2024-11-26 21:14:29.736032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.822 [2024-11-26 21:14:29.736077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.081 BaseBdev1_malloc 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.081 true 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.081 [2024-11-26 21:14:30.128101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:12.081 [2024-11-26 21:14:30.128162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.081 [2024-11-26 21:14:30.128183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:12.081 [2024-11-26 21:14:30.128194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.081 [2024-11-26 21:14:30.130240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.081 [2024-11-26 21:14:30.130276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:12.081 BaseBdev1 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.081 BaseBdev2_malloc 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.081 true 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.081 [2024-11-26 21:14:30.182548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:12.081 [2024-11-26 21:14:30.182607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.081 [2024-11-26 21:14:30.182638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:12.081 [2024-11-26 21:14:30.182648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.081 [2024-11-26 21:14:30.184714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.081 [2024-11-26 21:14:30.184817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:12.081 BaseBdev2 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.081 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.081 [2024-11-26 21:14:30.190590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:12.081 [2024-11-26 21:14:30.192381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:12.081 [2024-11-26 21:14:30.192610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:12.081 [2024-11-26 21:14:30.192665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.081 [2024-11-26 21:14:30.192919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:12.081 [2024-11-26 21:14:30.193124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:12.081 [2024-11-26 21:14:30.193171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:12.082 [2024-11-26 21:14:30.193351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.082 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.340 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.340 "name": "raid_bdev1", 00:07:12.340 "uuid": "51fd6479-588a-4700-9868-6d06841156c0", 00:07:12.340 "strip_size_kb": 64, 00:07:12.340 "state": "online", 00:07:12.340 "raid_level": "raid0", 00:07:12.340 "superblock": true, 00:07:12.340 "num_base_bdevs": 2, 00:07:12.340 "num_base_bdevs_discovered": 2, 00:07:12.340 "num_base_bdevs_operational": 2, 00:07:12.340 "base_bdevs_list": [ 00:07:12.340 { 00:07:12.340 "name": "BaseBdev1", 00:07:12.340 "uuid": "743874bb-4185-5a29-9890-115f63167939", 00:07:12.340 "is_configured": true, 00:07:12.340 "data_offset": 2048, 00:07:12.340 "data_size": 63488 00:07:12.340 }, 00:07:12.340 { 00:07:12.340 "name": "BaseBdev2", 00:07:12.340 "uuid": "2274758b-eaad-5ad3-88ec-a3a25c81b80a", 00:07:12.340 "is_configured": true, 00:07:12.340 "data_offset": 2048, 00:07:12.340 "data_size": 63488 00:07:12.340 } 00:07:12.340 ] 00:07:12.340 }' 00:07:12.340 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.340 21:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.598 21:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:12.598 21:14:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:12.598 [2024-11-26 21:14:30.750988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.535 21:14:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.794 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.794 "name": "raid_bdev1", 00:07:13.794 "uuid": "51fd6479-588a-4700-9868-6d06841156c0", 00:07:13.794 "strip_size_kb": 64, 00:07:13.794 "state": "online", 00:07:13.794 "raid_level": "raid0", 00:07:13.794 "superblock": true, 00:07:13.794 "num_base_bdevs": 2, 00:07:13.794 "num_base_bdevs_discovered": 2, 00:07:13.794 "num_base_bdevs_operational": 2, 00:07:13.794 "base_bdevs_list": [ 00:07:13.794 { 00:07:13.794 "name": "BaseBdev1", 00:07:13.794 "uuid": "743874bb-4185-5a29-9890-115f63167939", 00:07:13.794 "is_configured": true, 00:07:13.794 "data_offset": 2048, 00:07:13.794 "data_size": 63488 00:07:13.794 }, 00:07:13.794 { 00:07:13.794 "name": "BaseBdev2", 00:07:13.794 "uuid": "2274758b-eaad-5ad3-88ec-a3a25c81b80a", 00:07:13.794 "is_configured": true, 00:07:13.794 "data_offset": 2048, 00:07:13.794 "data_size": 63488 00:07:13.794 } 00:07:13.794 ] 00:07:13.794 }' 00:07:13.794 21:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.794 21:14:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:14.056 21:14:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.056 [2024-11-26 21:14:32.053011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:14.056 [2024-11-26 21:14:32.053101] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:14.056 [2024-11-26 21:14:32.055760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.056 [2024-11-26 21:14:32.055844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.056 [2024-11-26 21:14:32.055892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.056 [2024-11-26 21:14:32.055932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61285 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61285 ']' 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61285 00:07:14.056 { 00:07:14.056 "results": [ 00:07:14.056 { 00:07:14.056 "job": "raid_bdev1", 00:07:14.056 "core_mask": "0x1", 00:07:14.056 "workload": "randrw", 00:07:14.056 "percentage": 50, 00:07:14.056 "status": "finished", 00:07:14.056 "queue_depth": 1, 00:07:14.056 "io_size": 131072, 00:07:14.056 "runtime": 1.302839, 00:07:14.056 "iops": 16337.398558072025, 00:07:14.056 "mibps": 2042.1748197590032, 00:07:14.056 "io_failed": 1, 00:07:14.056 "io_timeout": 0, 00:07:14.056 "avg_latency_us": 84.46986292320803, 00:07:14.056 "min_latency_us": 26.270742358078603, 00:07:14.056 "max_latency_us": 1359.3711790393013 
00:07:14.056 } 00:07:14.056 ], 00:07:14.056 "core_count": 1 00:07:14.056 } 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61285 00:07:14.056 killing process with pid 61285 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61285' 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61285 00:07:14.056 21:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61285 00:07:14.056 [2024-11-26 21:14:32.096916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.323 [2024-11-26 21:14:32.229738] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.267 21:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:15.267 21:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tB3i3FY3AY 00:07:15.267 21:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:15.267 21:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:07:15.267 21:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:15.267 21:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.267 21:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.267 21:14:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:07:15.267 00:07:15.267 real 0m4.226s 00:07:15.267 user 0m5.052s 00:07:15.267 sys 0m0.508s 00:07:15.267 21:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.267 21:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.267 ************************************ 00:07:15.267 END TEST raid_read_error_test 00:07:15.267 ************************************ 00:07:15.526 21:14:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:15.526 21:14:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:15.526 21:14:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.526 21:14:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.526 ************************************ 00:07:15.526 START TEST raid_write_error_test 00:07:15.526 ************************************ 00:07:15.526 21:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:15.526 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:15.526 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:15.526 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:15.526 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.527 21:14:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZDMuhF15El 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61425 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61425 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:15.527 21:14:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61425 ']' 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.527 21:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.527 [2024-11-26 21:14:33.548379] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:15.527 [2024-11-26 21:14:33.548567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61425 ] 00:07:15.785 [2024-11-26 21:14:33.718477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.785 [2024-11-26 21:14:33.830391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.044 [2024-11-26 21:14:34.022352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.044 [2024-11-26 21:14:34.022418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.305 BaseBdev1_malloc 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.305 true 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.305 [2024-11-26 21:14:34.423171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:16.305 [2024-11-26 21:14:34.423284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.305 [2024-11-26 21:14:34.423307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:16.305 [2024-11-26 21:14:34.423318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.305 [2024-11-26 21:14:34.425364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.305 [2024-11-26 21:14:34.425414] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:16.305 BaseBdev1 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.305 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.564 BaseBdev2_malloc 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.564 true 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.564 [2024-11-26 21:14:34.488877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:16.564 [2024-11-26 21:14:34.488929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.564 [2024-11-26 21:14:34.488961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:16.564 
[2024-11-26 21:14:34.488971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.564 [2024-11-26 21:14:34.490951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.564 [2024-11-26 21:14:34.490997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:16.564 BaseBdev2 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.564 [2024-11-26 21:14:34.500913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:16.564 [2024-11-26 21:14:34.502637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:16.564 [2024-11-26 21:14:34.502812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:16.564 [2024-11-26 21:14:34.502828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:16.564 [2024-11-26 21:14:34.503053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:16.564 [2024-11-26 21:14:34.503205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:16.564 [2024-11-26 21:14:34.503219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:16.564 [2024-11-26 21:14:34.503369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.564 
21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.564 "name": "raid_bdev1", 00:07:16.564 "uuid": "106dc848-b723-4557-9c24-348a1d9f3570", 00:07:16.564 "strip_size_kb": 64, 00:07:16.564 "state": "online", 00:07:16.564 "raid_level": "raid0", 00:07:16.564 "superblock": true, 
00:07:16.564 "num_base_bdevs": 2, 00:07:16.564 "num_base_bdevs_discovered": 2, 00:07:16.564 "num_base_bdevs_operational": 2, 00:07:16.564 "base_bdevs_list": [ 00:07:16.564 { 00:07:16.564 "name": "BaseBdev1", 00:07:16.564 "uuid": "4c8bd319-82bd-5437-8095-eaaee7c2d981", 00:07:16.564 "is_configured": true, 00:07:16.564 "data_offset": 2048, 00:07:16.564 "data_size": 63488 00:07:16.564 }, 00:07:16.564 { 00:07:16.564 "name": "BaseBdev2", 00:07:16.564 "uuid": "3ccdfc0d-3faf-5771-9366-68886efdfd8b", 00:07:16.564 "is_configured": true, 00:07:16.564 "data_offset": 2048, 00:07:16.564 "data_size": 63488 00:07:16.564 } 00:07:16.564 ] 00:07:16.564 }' 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.564 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.132 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:17.132 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:17.132 [2024-11-26 21:14:35.077301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.071 21:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.071 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.071 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.071 21:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.071 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.071 21:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.071 "name": "raid_bdev1", 00:07:18.071 "uuid": "106dc848-b723-4557-9c24-348a1d9f3570", 00:07:18.071 "strip_size_kb": 64, 00:07:18.071 "state": "online", 00:07:18.071 "raid_level": "raid0", 
00:07:18.071 "superblock": true, 00:07:18.071 "num_base_bdevs": 2, 00:07:18.071 "num_base_bdevs_discovered": 2, 00:07:18.071 "num_base_bdevs_operational": 2, 00:07:18.071 "base_bdevs_list": [ 00:07:18.071 { 00:07:18.071 "name": "BaseBdev1", 00:07:18.071 "uuid": "4c8bd319-82bd-5437-8095-eaaee7c2d981", 00:07:18.071 "is_configured": true, 00:07:18.071 "data_offset": 2048, 00:07:18.071 "data_size": 63488 00:07:18.071 }, 00:07:18.071 { 00:07:18.071 "name": "BaseBdev2", 00:07:18.071 "uuid": "3ccdfc0d-3faf-5771-9366-68886efdfd8b", 00:07:18.071 "is_configured": true, 00:07:18.071 "data_offset": 2048, 00:07:18.071 "data_size": 63488 00:07:18.071 } 00:07:18.071 ] 00:07:18.071 }' 00:07:18.071 21:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.071 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.330 [2024-11-26 21:14:36.412850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:18.330 [2024-11-26 21:14:36.412978] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.330 [2024-11-26 21:14:36.415777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.330 [2024-11-26 21:14:36.415863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.330 [2024-11-26 21:14:36.415915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.330 [2024-11-26 21:14:36.415981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:18.330 { 
00:07:18.330 "results": [ 00:07:18.330 { 00:07:18.330 "job": "raid_bdev1", 00:07:18.330 "core_mask": "0x1", 00:07:18.330 "workload": "randrw", 00:07:18.330 "percentage": 50, 00:07:18.330 "status": "finished", 00:07:18.330 "queue_depth": 1, 00:07:18.330 "io_size": 131072, 00:07:18.330 "runtime": 1.336543, 00:07:18.330 "iops": 16315.225174199408, 00:07:18.330 "mibps": 2039.403146774926, 00:07:18.330 "io_failed": 1, 00:07:18.330 "io_timeout": 0, 00:07:18.330 "avg_latency_us": 84.65678794297652, 00:07:18.330 "min_latency_us": 25.9353711790393, 00:07:18.330 "max_latency_us": 1359.3711790393013 00:07:18.330 } 00:07:18.330 ], 00:07:18.330 "core_count": 1 00:07:18.330 } 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61425 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61425 ']' 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61425 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61425 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61425' 00:07:18.330 killing process with pid 61425 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61425 00:07:18.330 [2024-11-26 21:14:36.462787] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.330 21:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61425 00:07:18.589 [2024-11-26 21:14:36.594834] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.966 21:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZDMuhF15El 00:07:19.966 21:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:19.966 21:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:19.966 21:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:19.966 21:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:19.966 21:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.966 21:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:19.966 21:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:19.966 00:07:19.966 real 0m4.283s 00:07:19.966 user 0m5.128s 00:07:19.966 sys 0m0.540s 00:07:19.966 21:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.966 ************************************ 00:07:19.966 END TEST raid_write_error_test 00:07:19.966 ************************************ 00:07:19.966 21:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.966 21:14:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:19.966 21:14:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:19.966 21:14:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:19.966 21:14:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.966 21:14:37 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.966 ************************************ 00:07:19.966 START TEST raid_state_function_test 00:07:19.966 ************************************ 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:19.966 21:14:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:19.966 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61564 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61564' 00:07:19.967 Process raid pid: 61564 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61564 00:07:19.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61564 ']' 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.967 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.967 [2024-11-26 21:14:37.889083] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:19.967 [2024-11-26 21:14:37.889264] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.967 [2024-11-26 21:14:38.050318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.226 [2024-11-26 21:14:38.158720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.226 [2024-11-26 21:14:38.353368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.226 [2024-11-26 21:14:38.353485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.796 [2024-11-26 21:14:38.710011] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.796 [2024-11-26 21:14:38.710123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.796 [2024-11-26 21:14:38.710154] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.796 [2024-11-26 21:14:38.710178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.796 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.797 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.797 21:14:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.797 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.797 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.797 21:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.797 21:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.797 21:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.797 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.797 "name": "Existed_Raid", 00:07:20.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.797 "strip_size_kb": 64, 00:07:20.797 "state": "configuring", 00:07:20.797 "raid_level": "concat", 00:07:20.797 "superblock": false, 00:07:20.797 "num_base_bdevs": 2, 00:07:20.797 "num_base_bdevs_discovered": 0, 00:07:20.797 "num_base_bdevs_operational": 2, 00:07:20.797 "base_bdevs_list": [ 00:07:20.797 { 00:07:20.797 "name": "BaseBdev1", 00:07:20.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.797 "is_configured": false, 00:07:20.797 "data_offset": 0, 00:07:20.797 "data_size": 0 00:07:20.797 }, 00:07:20.797 { 00:07:20.797 "name": "BaseBdev2", 00:07:20.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.797 "is_configured": false, 00:07:20.797 "data_offset": 0, 00:07:20.797 "data_size": 0 00:07:20.797 } 00:07:20.797 ] 00:07:20.797 }' 00:07:20.797 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.797 21:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.057 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.057 21:14:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.057 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.057 [2024-11-26 21:14:39.193165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.057 [2024-11-26 21:14:39.193247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:21.057 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.057 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.057 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.057 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.057 [2024-11-26 21:14:39.205078] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.057 [2024-11-26 21:14:39.205167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.057 [2024-11-26 21:14:39.205193] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.057 [2024-11-26 21:14:39.205219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.057 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.057 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:21.057 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 [2024-11-26 21:14:39.253015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:07:21.315 BaseBdev1 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.315 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.316 [ 00:07:21.316 { 00:07:21.316 "name": "BaseBdev1", 00:07:21.316 "aliases": [ 00:07:21.316 "fa116fec-c920-4ca1-9698-d204aed3ef18" 00:07:21.316 ], 00:07:21.316 "product_name": "Malloc disk", 00:07:21.316 "block_size": 512, 00:07:21.316 "num_blocks": 65536, 00:07:21.316 "uuid": "fa116fec-c920-4ca1-9698-d204aed3ef18", 00:07:21.316 "assigned_rate_limits": { 00:07:21.316 
"rw_ios_per_sec": 0, 00:07:21.316 "rw_mbytes_per_sec": 0, 00:07:21.316 "r_mbytes_per_sec": 0, 00:07:21.316 "w_mbytes_per_sec": 0 00:07:21.316 }, 00:07:21.316 "claimed": true, 00:07:21.316 "claim_type": "exclusive_write", 00:07:21.316 "zoned": false, 00:07:21.316 "supported_io_types": { 00:07:21.316 "read": true, 00:07:21.316 "write": true, 00:07:21.316 "unmap": true, 00:07:21.316 "flush": true, 00:07:21.316 "reset": true, 00:07:21.316 "nvme_admin": false, 00:07:21.316 "nvme_io": false, 00:07:21.316 "nvme_io_md": false, 00:07:21.316 "write_zeroes": true, 00:07:21.316 "zcopy": true, 00:07:21.316 "get_zone_info": false, 00:07:21.316 "zone_management": false, 00:07:21.316 "zone_append": false, 00:07:21.316 "compare": false, 00:07:21.316 "compare_and_write": false, 00:07:21.316 "abort": true, 00:07:21.316 "seek_hole": false, 00:07:21.316 "seek_data": false, 00:07:21.316 "copy": true, 00:07:21.316 "nvme_iov_md": false 00:07:21.316 }, 00:07:21.316 "memory_domains": [ 00:07:21.316 { 00:07:21.316 "dma_device_id": "system", 00:07:21.316 "dma_device_type": 1 00:07:21.316 }, 00:07:21.316 { 00:07:21.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.316 "dma_device_type": 2 00:07:21.316 } 00:07:21.316 ], 00:07:21.316 "driver_specific": {} 00:07:21.316 } 00:07:21.316 ] 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.316 "name": "Existed_Raid", 00:07:21.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.316 "strip_size_kb": 64, 00:07:21.316 "state": "configuring", 00:07:21.316 "raid_level": "concat", 00:07:21.316 "superblock": false, 00:07:21.316 "num_base_bdevs": 2, 00:07:21.316 "num_base_bdevs_discovered": 1, 00:07:21.316 "num_base_bdevs_operational": 2, 00:07:21.316 "base_bdevs_list": [ 00:07:21.316 { 00:07:21.316 "name": "BaseBdev1", 00:07:21.316 "uuid": "fa116fec-c920-4ca1-9698-d204aed3ef18", 00:07:21.316 "is_configured": true, 00:07:21.316 "data_offset": 0, 00:07:21.316 "data_size": 65536 00:07:21.316 }, 00:07:21.316 { 00:07:21.316 "name": 
"BaseBdev2", 00:07:21.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.316 "is_configured": false, 00:07:21.316 "data_offset": 0, 00:07:21.316 "data_size": 0 00:07:21.316 } 00:07:21.316 ] 00:07:21.316 }' 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.316 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.574 [2024-11-26 21:14:39.704267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.574 [2024-11-26 21:14:39.704324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.574 [2024-11-26 21:14:39.712281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.574 [2024-11-26 21:14:39.714054] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.574 [2024-11-26 21:14:39.714144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.574 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.575 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.575 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.575 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.833 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:21.833 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.833 "name": "Existed_Raid", 00:07:21.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.833 "strip_size_kb": 64, 00:07:21.833 "state": "configuring", 00:07:21.833 "raid_level": "concat", 00:07:21.833 "superblock": false, 00:07:21.833 "num_base_bdevs": 2, 00:07:21.833 "num_base_bdevs_discovered": 1, 00:07:21.833 "num_base_bdevs_operational": 2, 00:07:21.833 "base_bdevs_list": [ 00:07:21.833 { 00:07:21.833 "name": "BaseBdev1", 00:07:21.833 "uuid": "fa116fec-c920-4ca1-9698-d204aed3ef18", 00:07:21.833 "is_configured": true, 00:07:21.833 "data_offset": 0, 00:07:21.833 "data_size": 65536 00:07:21.833 }, 00:07:21.833 { 00:07:21.833 "name": "BaseBdev2", 00:07:21.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.833 "is_configured": false, 00:07:21.833 "data_offset": 0, 00:07:21.833 "data_size": 0 00:07:21.833 } 00:07:21.833 ] 00:07:21.833 }' 00:07:21.833 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.833 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.093 [2024-11-26 21:14:40.163484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:22.093 [2024-11-26 21:14:40.163624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:22.093 [2024-11-26 21:14:40.163650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:22.093 [2024-11-26 21:14:40.163960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:22.093 [2024-11-26 21:14:40.164205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:22.093 [2024-11-26 21:14:40.164254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:22.093 [2024-11-26 21:14:40.164545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.093 BaseBdev2 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.093 [ 00:07:22.093 { 00:07:22.093 "name": "BaseBdev2", 00:07:22.093 "aliases": [ 00:07:22.093 "b82a07fb-7c55-4494-b904-05c8cdfd02ba" 00:07:22.093 ], 00:07:22.093 "product_name": "Malloc disk", 00:07:22.093 "block_size": 512, 00:07:22.093 "num_blocks": 65536, 00:07:22.093 "uuid": "b82a07fb-7c55-4494-b904-05c8cdfd02ba", 00:07:22.093 "assigned_rate_limits": { 00:07:22.093 "rw_ios_per_sec": 0, 00:07:22.093 "rw_mbytes_per_sec": 0, 00:07:22.093 "r_mbytes_per_sec": 0, 00:07:22.093 "w_mbytes_per_sec": 0 00:07:22.093 }, 00:07:22.093 "claimed": true, 00:07:22.093 "claim_type": "exclusive_write", 00:07:22.093 "zoned": false, 00:07:22.093 "supported_io_types": { 00:07:22.093 "read": true, 00:07:22.093 "write": true, 00:07:22.093 "unmap": true, 00:07:22.093 "flush": true, 00:07:22.093 "reset": true, 00:07:22.093 "nvme_admin": false, 00:07:22.093 "nvme_io": false, 00:07:22.093 "nvme_io_md": false, 00:07:22.093 "write_zeroes": true, 00:07:22.093 "zcopy": true, 00:07:22.093 "get_zone_info": false, 00:07:22.093 "zone_management": false, 00:07:22.093 "zone_append": false, 00:07:22.093 "compare": false, 00:07:22.093 "compare_and_write": false, 00:07:22.093 "abort": true, 00:07:22.093 "seek_hole": false, 00:07:22.093 "seek_data": false, 00:07:22.093 "copy": true, 00:07:22.093 "nvme_iov_md": false 00:07:22.093 }, 00:07:22.093 "memory_domains": [ 00:07:22.093 { 00:07:22.093 "dma_device_id": "system", 00:07:22.093 "dma_device_type": 1 00:07:22.093 }, 00:07:22.093 { 00:07:22.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.093 "dma_device_type": 2 00:07:22.093 } 00:07:22.093 ], 00:07:22.093 "driver_specific": {} 00:07:22.093 } 00:07:22.093 ] 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- 
# (( i++ )) 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.093 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.093 "name": "Existed_Raid", 00:07:22.093 
"uuid": "dc418d64-7654-4b8c-92cb-0b6a39a46cb1", 00:07:22.093 "strip_size_kb": 64, 00:07:22.093 "state": "online", 00:07:22.093 "raid_level": "concat", 00:07:22.093 "superblock": false, 00:07:22.093 "num_base_bdevs": 2, 00:07:22.093 "num_base_bdevs_discovered": 2, 00:07:22.093 "num_base_bdevs_operational": 2, 00:07:22.094 "base_bdevs_list": [ 00:07:22.094 { 00:07:22.094 "name": "BaseBdev1", 00:07:22.094 "uuid": "fa116fec-c920-4ca1-9698-d204aed3ef18", 00:07:22.094 "is_configured": true, 00:07:22.094 "data_offset": 0, 00:07:22.094 "data_size": 65536 00:07:22.094 }, 00:07:22.094 { 00:07:22.094 "name": "BaseBdev2", 00:07:22.094 "uuid": "b82a07fb-7c55-4494-b904-05c8cdfd02ba", 00:07:22.094 "is_configured": true, 00:07:22.094 "data_offset": 0, 00:07:22.094 "data_size": 65536 00:07:22.094 } 00:07:22.094 ] 00:07:22.094 }' 00:07:22.094 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.094 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.661 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:22.661 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:22.661 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:22.661 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:22.661 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:22.661 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:22.661 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:22.661 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.662 21:14:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:22.662 [2024-11-26 21:14:40.642978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:22.662 "name": "Existed_Raid", 00:07:22.662 "aliases": [ 00:07:22.662 "dc418d64-7654-4b8c-92cb-0b6a39a46cb1" 00:07:22.662 ], 00:07:22.662 "product_name": "Raid Volume", 00:07:22.662 "block_size": 512, 00:07:22.662 "num_blocks": 131072, 00:07:22.662 "uuid": "dc418d64-7654-4b8c-92cb-0b6a39a46cb1", 00:07:22.662 "assigned_rate_limits": { 00:07:22.662 "rw_ios_per_sec": 0, 00:07:22.662 "rw_mbytes_per_sec": 0, 00:07:22.662 "r_mbytes_per_sec": 0, 00:07:22.662 "w_mbytes_per_sec": 0 00:07:22.662 }, 00:07:22.662 "claimed": false, 00:07:22.662 "zoned": false, 00:07:22.662 "supported_io_types": { 00:07:22.662 "read": true, 00:07:22.662 "write": true, 00:07:22.662 "unmap": true, 00:07:22.662 "flush": true, 00:07:22.662 "reset": true, 00:07:22.662 "nvme_admin": false, 00:07:22.662 "nvme_io": false, 00:07:22.662 "nvme_io_md": false, 00:07:22.662 "write_zeroes": true, 00:07:22.662 "zcopy": false, 00:07:22.662 "get_zone_info": false, 00:07:22.662 "zone_management": false, 00:07:22.662 "zone_append": false, 00:07:22.662 "compare": false, 00:07:22.662 "compare_and_write": false, 00:07:22.662 "abort": false, 00:07:22.662 "seek_hole": false, 00:07:22.662 "seek_data": false, 00:07:22.662 "copy": false, 00:07:22.662 "nvme_iov_md": false 00:07:22.662 }, 00:07:22.662 "memory_domains": [ 00:07:22.662 { 00:07:22.662 "dma_device_id": "system", 00:07:22.662 "dma_device_type": 1 00:07:22.662 }, 00:07:22.662 { 00:07:22.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.662 
"dma_device_type": 2 00:07:22.662 }, 00:07:22.662 { 00:07:22.662 "dma_device_id": "system", 00:07:22.662 "dma_device_type": 1 00:07:22.662 }, 00:07:22.662 { 00:07:22.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.662 "dma_device_type": 2 00:07:22.662 } 00:07:22.662 ], 00:07:22.662 "driver_specific": { 00:07:22.662 "raid": { 00:07:22.662 "uuid": "dc418d64-7654-4b8c-92cb-0b6a39a46cb1", 00:07:22.662 "strip_size_kb": 64, 00:07:22.662 "state": "online", 00:07:22.662 "raid_level": "concat", 00:07:22.662 "superblock": false, 00:07:22.662 "num_base_bdevs": 2, 00:07:22.662 "num_base_bdevs_discovered": 2, 00:07:22.662 "num_base_bdevs_operational": 2, 00:07:22.662 "base_bdevs_list": [ 00:07:22.662 { 00:07:22.662 "name": "BaseBdev1", 00:07:22.662 "uuid": "fa116fec-c920-4ca1-9698-d204aed3ef18", 00:07:22.662 "is_configured": true, 00:07:22.662 "data_offset": 0, 00:07:22.662 "data_size": 65536 00:07:22.662 }, 00:07:22.662 { 00:07:22.662 "name": "BaseBdev2", 00:07:22.662 "uuid": "b82a07fb-7c55-4494-b904-05c8cdfd02ba", 00:07:22.662 "is_configured": true, 00:07:22.662 "data_offset": 0, 00:07:22.662 "data_size": 65536 00:07:22.662 } 00:07:22.662 ] 00:07:22.662 } 00:07:22.662 } 00:07:22.662 }' 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:22.662 BaseBdev2' 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.662 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:22.922 [2024-11-26 21:14:40.826417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:22.922 [2024-11-26 21:14:40.826451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.922 [2024-11-26 21:14:40.826499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.922 "name": "Existed_Raid", 00:07:22.922 "uuid": "dc418d64-7654-4b8c-92cb-0b6a39a46cb1", 00:07:22.922 "strip_size_kb": 64, 00:07:22.922 "state": "offline", 00:07:22.922 "raid_level": "concat", 00:07:22.922 "superblock": false, 00:07:22.922 "num_base_bdevs": 2, 00:07:22.922 "num_base_bdevs_discovered": 1, 00:07:22.922 "num_base_bdevs_operational": 1, 00:07:22.922 "base_bdevs_list": [ 00:07:22.922 { 00:07:22.922 "name": null, 00:07:22.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.922 "is_configured": false, 00:07:22.922 "data_offset": 0, 00:07:22.922 "data_size": 65536 00:07:22.922 }, 00:07:22.922 { 00:07:22.922 "name": "BaseBdev2", 00:07:22.922 "uuid": "b82a07fb-7c55-4494-b904-05c8cdfd02ba", 00:07:22.922 "is_configured": true, 00:07:22.922 "data_offset": 0, 00:07:22.922 "data_size": 65536 00:07:22.922 } 00:07:22.922 ] 00:07:22.922 }' 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.922 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.489 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 
)) 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.490 [2024-11-26 21:14:41.391835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:23.490 [2024-11-26 21:14:41.391891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61564 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61564 ']' 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61564 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61564 00:07:23.490 killing process with pid 61564 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61564' 00:07:23.490 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61564 00:07:23.490 [2024-11-26 21:14:41.566530] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.490 
21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61564 00:07:23.490 [2024-11-26 21:14:41.582746] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:24.869 00:07:24.869 real 0m4.874s 00:07:24.869 user 0m7.048s 00:07:24.869 sys 0m0.726s 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.869 ************************************ 00:07:24.869 END TEST raid_state_function_test 00:07:24.869 ************************************ 00:07:24.869 21:14:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:24.869 21:14:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:24.869 21:14:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.869 21:14:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.869 ************************************ 00:07:24.869 START TEST raid_state_function_test_sb 00:07:24.869 ************************************ 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:24.869 21:14:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:24.869 Process raid pid: 61812 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61812 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61812' 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61812 00:07:24.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61812 ']' 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.869 21:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.869 [2024-11-26 21:14:42.830450] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:24.869 [2024-11-26 21:14:42.830725] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.869 [2024-11-26 21:14:43.018092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.128 [2024-11-26 21:14:43.125531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.387 [2024-11-26 21:14:43.326180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.387 [2024-11-26 21:14:43.326300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.646 [2024-11-26 21:14:43.654931] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.646 [2024-11-26 21:14:43.655006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.646 [2024-11-26 21:14:43.655019] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.646 [2024-11-26 21:14:43.655028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.646 "name": "Existed_Raid", 00:07:25.646 "uuid": "fe79a490-3b51-4d52-9271-16c5b2087e4e", 00:07:25.646 
"strip_size_kb": 64, 00:07:25.646 "state": "configuring", 00:07:25.646 "raid_level": "concat", 00:07:25.646 "superblock": true, 00:07:25.646 "num_base_bdevs": 2, 00:07:25.646 "num_base_bdevs_discovered": 0, 00:07:25.646 "num_base_bdevs_operational": 2, 00:07:25.646 "base_bdevs_list": [ 00:07:25.646 { 00:07:25.646 "name": "BaseBdev1", 00:07:25.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.646 "is_configured": false, 00:07:25.646 "data_offset": 0, 00:07:25.646 "data_size": 0 00:07:25.646 }, 00:07:25.646 { 00:07:25.646 "name": "BaseBdev2", 00:07:25.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.646 "is_configured": false, 00:07:25.646 "data_offset": 0, 00:07:25.646 "data_size": 0 00:07:25.646 } 00:07:25.646 ] 00:07:25.646 }' 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.646 21:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.213 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.213 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.213 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.213 [2024-11-26 21:14:44.082122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.213 [2024-11-26 21:14:44.082213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:26.213 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.213 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.213 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:26.213 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.213 [2024-11-26 21:14:44.090103] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.213 [2024-11-26 21:14:44.090181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.213 [2024-11-26 21:14:44.090209] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.213 [2024-11-26 21:14:44.090234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.213 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.213 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:26.213 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.213 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.213 [2024-11-26 21:14:44.135061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.214 BaseBdev1 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.214 [ 00:07:26.214 { 00:07:26.214 "name": "BaseBdev1", 00:07:26.214 "aliases": [ 00:07:26.214 "bf565fc1-53f5-4500-83bc-1f2c12e01c83" 00:07:26.214 ], 00:07:26.214 "product_name": "Malloc disk", 00:07:26.214 "block_size": 512, 00:07:26.214 "num_blocks": 65536, 00:07:26.214 "uuid": "bf565fc1-53f5-4500-83bc-1f2c12e01c83", 00:07:26.214 "assigned_rate_limits": { 00:07:26.214 "rw_ios_per_sec": 0, 00:07:26.214 "rw_mbytes_per_sec": 0, 00:07:26.214 "r_mbytes_per_sec": 0, 00:07:26.214 "w_mbytes_per_sec": 0 00:07:26.214 }, 00:07:26.214 "claimed": true, 00:07:26.214 "claim_type": "exclusive_write", 00:07:26.214 "zoned": false, 00:07:26.214 "supported_io_types": { 00:07:26.214 "read": true, 00:07:26.214 "write": true, 00:07:26.214 "unmap": true, 00:07:26.214 "flush": true, 00:07:26.214 "reset": true, 00:07:26.214 "nvme_admin": false, 00:07:26.214 "nvme_io": false, 00:07:26.214 "nvme_io_md": false, 00:07:26.214 "write_zeroes": true, 00:07:26.214 "zcopy": true, 00:07:26.214 "get_zone_info": false, 00:07:26.214 "zone_management": false, 00:07:26.214 "zone_append": false, 00:07:26.214 "compare": false, 00:07:26.214 
"compare_and_write": false, 00:07:26.214 "abort": true, 00:07:26.214 "seek_hole": false, 00:07:26.214 "seek_data": false, 00:07:26.214 "copy": true, 00:07:26.214 "nvme_iov_md": false 00:07:26.214 }, 00:07:26.214 "memory_domains": [ 00:07:26.214 { 00:07:26.214 "dma_device_id": "system", 00:07:26.214 "dma_device_type": 1 00:07:26.214 }, 00:07:26.214 { 00:07:26.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.214 "dma_device_type": 2 00:07:26.214 } 00:07:26.214 ], 00:07:26.214 "driver_specific": {} 00:07:26.214 } 00:07:26.214 ] 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.214 21:14:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.214 "name": "Existed_Raid", 00:07:26.214 "uuid": "338a3b7a-cf61-4855-b07f-b4e37ce4f9ee", 00:07:26.214 "strip_size_kb": 64, 00:07:26.214 "state": "configuring", 00:07:26.214 "raid_level": "concat", 00:07:26.214 "superblock": true, 00:07:26.214 "num_base_bdevs": 2, 00:07:26.214 "num_base_bdevs_discovered": 1, 00:07:26.214 "num_base_bdevs_operational": 2, 00:07:26.214 "base_bdevs_list": [ 00:07:26.214 { 00:07:26.214 "name": "BaseBdev1", 00:07:26.214 "uuid": "bf565fc1-53f5-4500-83bc-1f2c12e01c83", 00:07:26.214 "is_configured": true, 00:07:26.214 "data_offset": 2048, 00:07:26.214 "data_size": 63488 00:07:26.214 }, 00:07:26.214 { 00:07:26.214 "name": "BaseBdev2", 00:07:26.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.214 "is_configured": false, 00:07:26.214 "data_offset": 0, 00:07:26.214 "data_size": 0 00:07:26.214 } 00:07:26.214 ] 00:07:26.214 }' 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.214 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.473 [2024-11-26 21:14:44.590325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.473 [2024-11-26 21:14:44.590380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.473 [2024-11-26 21:14:44.598342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.473 [2024-11-26 21:14:44.600164] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.473 [2024-11-26 21:14:44.600207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.473 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.731 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.731 "name": "Existed_Raid", 00:07:26.731 "uuid": "2879ed38-6056-49cf-8cc2-ab64f38b67d3", 00:07:26.731 "strip_size_kb": 64, 00:07:26.731 "state": "configuring", 00:07:26.731 "raid_level": "concat", 00:07:26.731 "superblock": true, 00:07:26.731 "num_base_bdevs": 2, 00:07:26.731 "num_base_bdevs_discovered": 1, 00:07:26.731 "num_base_bdevs_operational": 2, 00:07:26.731 "base_bdevs_list": [ 00:07:26.731 { 00:07:26.731 "name": "BaseBdev1", 00:07:26.731 "uuid": 
"bf565fc1-53f5-4500-83bc-1f2c12e01c83", 00:07:26.731 "is_configured": true, 00:07:26.731 "data_offset": 2048, 00:07:26.731 "data_size": 63488 00:07:26.731 }, 00:07:26.731 { 00:07:26.731 "name": "BaseBdev2", 00:07:26.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.731 "is_configured": false, 00:07:26.731 "data_offset": 0, 00:07:26.731 "data_size": 0 00:07:26.731 } 00:07:26.731 ] 00:07:26.731 }' 00:07:26.731 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.731 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.989 21:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:26.989 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.989 21:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.989 [2024-11-26 21:14:45.037139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.989 [2024-11-26 21:14:45.037450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:26.989 [2024-11-26 21:14:45.037505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.989 [2024-11-26 21:14:45.037778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:26.989 BaseBdev2 00:07:26.989 [2024-11-26 21:14:45.037987] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:26.989 [2024-11-26 21:14:45.038006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:26.989 [2024-11-26 21:14:45.038151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.989 [ 00:07:26.989 { 00:07:26.989 "name": "BaseBdev2", 00:07:26.989 "aliases": [ 00:07:26.989 "20d5f9b2-409b-4969-a504-4b9600df50c2" 00:07:26.989 ], 00:07:26.989 "product_name": "Malloc disk", 00:07:26.989 "block_size": 512, 00:07:26.989 "num_blocks": 65536, 00:07:26.989 "uuid": "20d5f9b2-409b-4969-a504-4b9600df50c2", 00:07:26.989 "assigned_rate_limits": { 00:07:26.989 "rw_ios_per_sec": 0, 00:07:26.989 "rw_mbytes_per_sec": 0, 00:07:26.989 "r_mbytes_per_sec": 0, 
00:07:26.989 "w_mbytes_per_sec": 0 00:07:26.989 }, 00:07:26.989 "claimed": true, 00:07:26.989 "claim_type": "exclusive_write", 00:07:26.989 "zoned": false, 00:07:26.989 "supported_io_types": { 00:07:26.989 "read": true, 00:07:26.989 "write": true, 00:07:26.989 "unmap": true, 00:07:26.989 "flush": true, 00:07:26.989 "reset": true, 00:07:26.989 "nvme_admin": false, 00:07:26.989 "nvme_io": false, 00:07:26.989 "nvme_io_md": false, 00:07:26.989 "write_zeroes": true, 00:07:26.989 "zcopy": true, 00:07:26.989 "get_zone_info": false, 00:07:26.989 "zone_management": false, 00:07:26.989 "zone_append": false, 00:07:26.989 "compare": false, 00:07:26.989 "compare_and_write": false, 00:07:26.989 "abort": true, 00:07:26.989 "seek_hole": false, 00:07:26.989 "seek_data": false, 00:07:26.989 "copy": true, 00:07:26.989 "nvme_iov_md": false 00:07:26.989 }, 00:07:26.989 "memory_domains": [ 00:07:26.989 { 00:07:26.989 "dma_device_id": "system", 00:07:26.989 "dma_device_type": 1 00:07:26.989 }, 00:07:26.989 { 00:07:26.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.989 "dma_device_type": 2 00:07:26.989 } 00:07:26.989 ], 00:07:26.989 "driver_specific": {} 00:07:26.989 } 00:07:26.989 ] 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.989 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.990 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.990 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.990 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.990 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.990 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.990 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.990 "name": "Existed_Raid", 00:07:26.990 "uuid": "2879ed38-6056-49cf-8cc2-ab64f38b67d3", 00:07:26.990 "strip_size_kb": 64, 00:07:26.990 "state": "online", 00:07:26.990 "raid_level": "concat", 00:07:26.990 "superblock": true, 00:07:26.990 "num_base_bdevs": 2, 00:07:26.990 "num_base_bdevs_discovered": 2, 00:07:26.990 "num_base_bdevs_operational": 2, 00:07:26.990 "base_bdevs_list": [ 00:07:26.990 { 00:07:26.990 "name": "BaseBdev1", 00:07:26.990 "uuid": 
"bf565fc1-53f5-4500-83bc-1f2c12e01c83", 00:07:26.990 "is_configured": true, 00:07:26.990 "data_offset": 2048, 00:07:26.990 "data_size": 63488 00:07:26.990 }, 00:07:26.990 { 00:07:26.990 "name": "BaseBdev2", 00:07:26.990 "uuid": "20d5f9b2-409b-4969-a504-4b9600df50c2", 00:07:26.990 "is_configured": true, 00:07:26.990 "data_offset": 2048, 00:07:26.990 "data_size": 63488 00:07:26.990 } 00:07:26.990 ] 00:07:26.990 }' 00:07:26.990 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.990 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:27.556 [2024-11-26 21:14:45.508661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:27.556 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:27.557 "name": "Existed_Raid", 00:07:27.557 "aliases": [ 00:07:27.557 "2879ed38-6056-49cf-8cc2-ab64f38b67d3" 00:07:27.557 ], 00:07:27.557 "product_name": "Raid Volume", 00:07:27.557 "block_size": 512, 00:07:27.557 "num_blocks": 126976, 00:07:27.557 "uuid": "2879ed38-6056-49cf-8cc2-ab64f38b67d3", 00:07:27.557 "assigned_rate_limits": { 00:07:27.557 "rw_ios_per_sec": 0, 00:07:27.557 "rw_mbytes_per_sec": 0, 00:07:27.557 "r_mbytes_per_sec": 0, 00:07:27.557 "w_mbytes_per_sec": 0 00:07:27.557 }, 00:07:27.557 "claimed": false, 00:07:27.557 "zoned": false, 00:07:27.557 "supported_io_types": { 00:07:27.557 "read": true, 00:07:27.557 "write": true, 00:07:27.557 "unmap": true, 00:07:27.557 "flush": true, 00:07:27.557 "reset": true, 00:07:27.557 "nvme_admin": false, 00:07:27.557 "nvme_io": false, 00:07:27.557 "nvme_io_md": false, 00:07:27.557 "write_zeroes": true, 00:07:27.557 "zcopy": false, 00:07:27.557 "get_zone_info": false, 00:07:27.557 "zone_management": false, 00:07:27.557 "zone_append": false, 00:07:27.557 "compare": false, 00:07:27.557 "compare_and_write": false, 00:07:27.557 "abort": false, 00:07:27.557 "seek_hole": false, 00:07:27.557 "seek_data": false, 00:07:27.557 "copy": false, 00:07:27.557 "nvme_iov_md": false 00:07:27.557 }, 00:07:27.557 "memory_domains": [ 00:07:27.557 { 00:07:27.557 "dma_device_id": "system", 00:07:27.557 "dma_device_type": 1 00:07:27.557 }, 00:07:27.557 { 00:07:27.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.557 "dma_device_type": 2 00:07:27.557 }, 00:07:27.557 { 00:07:27.557 "dma_device_id": "system", 00:07:27.557 "dma_device_type": 1 00:07:27.557 }, 00:07:27.557 { 00:07:27.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.557 "dma_device_type": 2 00:07:27.557 } 00:07:27.557 ], 00:07:27.557 "driver_specific": { 00:07:27.557 "raid": { 00:07:27.557 "uuid": "2879ed38-6056-49cf-8cc2-ab64f38b67d3", 00:07:27.557 
"strip_size_kb": 64, 00:07:27.557 "state": "online", 00:07:27.557 "raid_level": "concat", 00:07:27.557 "superblock": true, 00:07:27.557 "num_base_bdevs": 2, 00:07:27.557 "num_base_bdevs_discovered": 2, 00:07:27.557 "num_base_bdevs_operational": 2, 00:07:27.557 "base_bdevs_list": [ 00:07:27.557 { 00:07:27.557 "name": "BaseBdev1", 00:07:27.557 "uuid": "bf565fc1-53f5-4500-83bc-1f2c12e01c83", 00:07:27.557 "is_configured": true, 00:07:27.557 "data_offset": 2048, 00:07:27.557 "data_size": 63488 00:07:27.557 }, 00:07:27.557 { 00:07:27.557 "name": "BaseBdev2", 00:07:27.557 "uuid": "20d5f9b2-409b-4969-a504-4b9600df50c2", 00:07:27.557 "is_configured": true, 00:07:27.557 "data_offset": 2048, 00:07:27.557 "data_size": 63488 00:07:27.557 } 00:07:27.557 ] 00:07:27.557 } 00:07:27.557 } 00:07:27.557 }' 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:27.557 BaseBdev2' 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.557 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.815 [2024-11-26 21:14:45.740037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:27.815 [2024-11-26 21:14:45.740115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.815 [2024-11-26 21:14:45.740171] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.815 
21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.815 "name": "Existed_Raid", 00:07:27.815 "uuid": "2879ed38-6056-49cf-8cc2-ab64f38b67d3", 00:07:27.815 "strip_size_kb": 64, 00:07:27.815 "state": "offline", 00:07:27.815 "raid_level": "concat", 00:07:27.815 "superblock": true, 00:07:27.815 "num_base_bdevs": 2, 00:07:27.815 "num_base_bdevs_discovered": 1, 00:07:27.815 "num_base_bdevs_operational": 1, 00:07:27.815 "base_bdevs_list": [ 00:07:27.815 { 00:07:27.815 "name": null, 00:07:27.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.815 "is_configured": false, 00:07:27.815 "data_offset": 0, 00:07:27.815 "data_size": 63488 00:07:27.815 }, 00:07:27.815 { 00:07:27.815 "name": "BaseBdev2", 00:07:27.815 "uuid": "20d5f9b2-409b-4969-a504-4b9600df50c2", 00:07:27.815 "is_configured": true, 00:07:27.815 "data_offset": 2048, 00:07:27.815 "data_size": 63488 00:07:27.815 } 00:07:27.815 ] 00:07:27.815 }' 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.815 21:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.381 21:14:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.381 [2024-11-26 21:14:46.277257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:28.381 [2024-11-26 21:14:46.277360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.381 21:14:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61812 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61812 ']' 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61812 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61812 00:07:28.381 killing process with pid 61812 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61812' 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61812 00:07:28.381 [2024-11-26 21:14:46.465213] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.381 21:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61812 00:07:28.381 [2024-11-26 21:14:46.482662] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:29.756 21:14:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:29.756 00:07:29.756 real 0m4.835s 00:07:29.756 user 0m6.962s 00:07:29.756 sys 0m0.760s 00:07:29.756 21:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.756 21:14:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.756 ************************************ 00:07:29.756 END TEST raid_state_function_test_sb 00:07:29.756 ************************************ 00:07:29.756 21:14:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:29.756 21:14:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:29.756 21:14:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.756 21:14:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:29.756 ************************************ 00:07:29.756 START TEST raid_superblock_test 00:07:29.756 ************************************ 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:29.756 
21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62064 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62064 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:29.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62064 ']' 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.756 21:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.756 [2024-11-26 21:14:47.718990] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:29.756 [2024-11-26 21:14:47.719124] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62064 ] 00:07:29.756 [2024-11-26 21:14:47.873728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.015 [2024-11-26 21:14:47.985455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.273 [2024-11-26 21:14:48.187674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.273 [2024-11-26 21:14:48.187711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:30.532 21:14:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.532 malloc1 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.532 [2024-11-26 21:14:48.641461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:30.532 [2024-11-26 21:14:48.641582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.532 [2024-11-26 21:14:48.641629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:30.532 [2024-11-26 21:14:48.641700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.532 [2024-11-26 21:14:48.644189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.532 [2024-11-26 21:14:48.644277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:30.532 pt1 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:30.532 21:14:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.532 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 malloc2 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 [2024-11-26 21:14:48.696358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:30.791 [2024-11-26 21:14:48.696470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.791 [2024-11-26 21:14:48.696515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:30.791 
[2024-11-26 21:14:48.696567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.791 [2024-11-26 21:14:48.698659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.791 [2024-11-26 21:14:48.698694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:30.791 pt2 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 [2024-11-26 21:14:48.704397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:30.791 [2024-11-26 21:14:48.706282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:30.791 [2024-11-26 21:14:48.706504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:30.791 [2024-11-26 21:14:48.706560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:30.791 [2024-11-26 21:14:48.706862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:30.791 [2024-11-26 21:14:48.707073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:30.791 [2024-11-26 21:14:48.707123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:30.791 [2024-11-26 21:14:48.707338] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.791 "name": "raid_bdev1", 00:07:30.791 "uuid": 
"791538d7-a318-4fed-a28f-818329578c85", 00:07:30.791 "strip_size_kb": 64, 00:07:30.791 "state": "online", 00:07:30.791 "raid_level": "concat", 00:07:30.791 "superblock": true, 00:07:30.791 "num_base_bdevs": 2, 00:07:30.791 "num_base_bdevs_discovered": 2, 00:07:30.791 "num_base_bdevs_operational": 2, 00:07:30.791 "base_bdevs_list": [ 00:07:30.791 { 00:07:30.791 "name": "pt1", 00:07:30.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.791 "is_configured": true, 00:07:30.791 "data_offset": 2048, 00:07:30.791 "data_size": 63488 00:07:30.791 }, 00:07:30.791 { 00:07:30.791 "name": "pt2", 00:07:30.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.791 "is_configured": true, 00:07:30.791 "data_offset": 2048, 00:07:30.791 "data_size": 63488 00:07:30.791 } 00:07:30.791 ] 00:07:30.791 }' 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.791 21:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.049 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:31.049 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:31.049 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:31.049 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:31.049 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:31.049 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:31.049 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.049 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:31.049 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.050 
21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.050 [2024-11-26 21:14:49.139938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.050 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.050 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:31.050 "name": "raid_bdev1", 00:07:31.050 "aliases": [ 00:07:31.050 "791538d7-a318-4fed-a28f-818329578c85" 00:07:31.050 ], 00:07:31.050 "product_name": "Raid Volume", 00:07:31.050 "block_size": 512, 00:07:31.050 "num_blocks": 126976, 00:07:31.050 "uuid": "791538d7-a318-4fed-a28f-818329578c85", 00:07:31.050 "assigned_rate_limits": { 00:07:31.050 "rw_ios_per_sec": 0, 00:07:31.050 "rw_mbytes_per_sec": 0, 00:07:31.050 "r_mbytes_per_sec": 0, 00:07:31.050 "w_mbytes_per_sec": 0 00:07:31.050 }, 00:07:31.050 "claimed": false, 00:07:31.050 "zoned": false, 00:07:31.050 "supported_io_types": { 00:07:31.050 "read": true, 00:07:31.050 "write": true, 00:07:31.050 "unmap": true, 00:07:31.050 "flush": true, 00:07:31.050 "reset": true, 00:07:31.050 "nvme_admin": false, 00:07:31.050 "nvme_io": false, 00:07:31.050 "nvme_io_md": false, 00:07:31.050 "write_zeroes": true, 00:07:31.050 "zcopy": false, 00:07:31.050 "get_zone_info": false, 00:07:31.050 "zone_management": false, 00:07:31.050 "zone_append": false, 00:07:31.050 "compare": false, 00:07:31.050 "compare_and_write": false, 00:07:31.050 "abort": false, 00:07:31.050 "seek_hole": false, 00:07:31.050 "seek_data": false, 00:07:31.050 "copy": false, 00:07:31.050 "nvme_iov_md": false 00:07:31.050 }, 00:07:31.050 "memory_domains": [ 00:07:31.050 { 00:07:31.050 "dma_device_id": "system", 00:07:31.050 "dma_device_type": 1 00:07:31.050 }, 00:07:31.050 { 00:07:31.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.050 "dma_device_type": 2 00:07:31.050 }, 00:07:31.050 { 00:07:31.050 "dma_device_id": "system", 00:07:31.050 
"dma_device_type": 1 00:07:31.050 }, 00:07:31.050 { 00:07:31.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.050 "dma_device_type": 2 00:07:31.050 } 00:07:31.050 ], 00:07:31.050 "driver_specific": { 00:07:31.050 "raid": { 00:07:31.050 "uuid": "791538d7-a318-4fed-a28f-818329578c85", 00:07:31.050 "strip_size_kb": 64, 00:07:31.050 "state": "online", 00:07:31.050 "raid_level": "concat", 00:07:31.050 "superblock": true, 00:07:31.050 "num_base_bdevs": 2, 00:07:31.050 "num_base_bdevs_discovered": 2, 00:07:31.050 "num_base_bdevs_operational": 2, 00:07:31.050 "base_bdevs_list": [ 00:07:31.050 { 00:07:31.050 "name": "pt1", 00:07:31.050 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.050 "is_configured": true, 00:07:31.050 "data_offset": 2048, 00:07:31.050 "data_size": 63488 00:07:31.050 }, 00:07:31.050 { 00:07:31.050 "name": "pt2", 00:07:31.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.050 "is_configured": true, 00:07:31.050 "data_offset": 2048, 00:07:31.050 "data_size": 63488 00:07:31.050 } 00:07:31.050 ] 00:07:31.050 } 00:07:31.050 } 00:07:31.050 }' 00:07:31.050 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:31.308 pt2' 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.308 21:14:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.308 [2024-11-26 21:14:49.391461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=791538d7-a318-4fed-a28f-818329578c85 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 791538d7-a318-4fed-a28f-818329578c85 ']' 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.308 [2024-11-26 21:14:49.419151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.308 [2024-11-26 21:14:49.419223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.308 [2024-11-26 21:14:49.419321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.308 [2024-11-26 21:14:49.419387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.308 [2024-11-26 21:14:49.419400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.308 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.567 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.567 [2024-11-26 21:14:49.550962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:31.568 [2024-11-26 21:14:49.552864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:31.568 [2024-11-26 21:14:49.552932] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:31.568 [2024-11-26 21:14:49.553012] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:31.568 [2024-11-26 21:14:49.553103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.568 [2024-11-26 21:14:49.553126] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:31.568 request: 00:07:31.568 { 00:07:31.568 "name": "raid_bdev1", 00:07:31.568 "raid_level": "concat", 00:07:31.568 "base_bdevs": [ 00:07:31.568 "malloc1", 00:07:31.568 "malloc2" 00:07:31.568 ], 00:07:31.568 "strip_size_kb": 64, 00:07:31.568 "superblock": false, 00:07:31.568 "method": "bdev_raid_create", 00:07:31.568 "req_id": 1 00:07:31.568 } 00:07:31.568 Got JSON-RPC error response 00:07:31.568 response: 00:07:31.568 { 00:07:31.568 "code": -17, 00:07:31.568 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:31.568 } 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.568 [2024-11-26 21:14:49.618820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:31.568 [2024-11-26 21:14:49.618932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.568 [2024-11-26 21:14:49.618978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:31.568 [2024-11-26 21:14:49.619032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.568 [2024-11-26 21:14:49.621263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.568 [2024-11-26 21:14:49.621350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:31.568 [2024-11-26 21:14:49.621479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:31.568 [2024-11-26 21:14:49.621583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:31.568 pt1 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.568 "name": "raid_bdev1", 00:07:31.568 "uuid": "791538d7-a318-4fed-a28f-818329578c85", 00:07:31.568 "strip_size_kb": 64, 00:07:31.568 "state": "configuring", 00:07:31.568 "raid_level": "concat", 00:07:31.568 "superblock": true, 00:07:31.568 "num_base_bdevs": 2, 00:07:31.568 "num_base_bdevs_discovered": 1, 00:07:31.568 "num_base_bdevs_operational": 2, 00:07:31.568 "base_bdevs_list": [ 00:07:31.568 { 00:07:31.568 "name": "pt1", 00:07:31.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.568 "is_configured": true, 00:07:31.568 "data_offset": 2048, 00:07:31.568 "data_size": 63488 00:07:31.568 }, 00:07:31.568 { 00:07:31.568 "name": null, 00:07:31.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.568 "is_configured": false, 00:07:31.568 "data_offset": 2048, 00:07:31.568 "data_size": 63488 00:07:31.568 } 00:07:31.568 ] 00:07:31.568 }' 00:07:31.568 21:14:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.568 21:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.137 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:32.137 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:32.137 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:32.137 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:32.137 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.138 [2024-11-26 21:14:50.046141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:32.138 [2024-11-26 21:14:50.046217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.138 [2024-11-26 21:14:50.046252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:32.138 [2024-11-26 21:14:50.046263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.138 [2024-11-26 21:14:50.046709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.138 [2024-11-26 21:14:50.046731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:32.138 [2024-11-26 21:14:50.046818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:32.138 [2024-11-26 21:14:50.046845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:32.138 [2024-11-26 21:14:50.046981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:32.138 [2024-11-26 21:14:50.046997] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:32.138 [2024-11-26 21:14:50.047277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:32.138 [2024-11-26 21:14:50.047421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:32.138 [2024-11-26 21:14:50.047430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:32.138 [2024-11-26 21:14:50.047577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.138 pt2 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.138 "name": "raid_bdev1", 00:07:32.138 "uuid": "791538d7-a318-4fed-a28f-818329578c85", 00:07:32.138 "strip_size_kb": 64, 00:07:32.138 "state": "online", 00:07:32.138 "raid_level": "concat", 00:07:32.138 "superblock": true, 00:07:32.138 "num_base_bdevs": 2, 00:07:32.138 "num_base_bdevs_discovered": 2, 00:07:32.138 "num_base_bdevs_operational": 2, 00:07:32.138 "base_bdevs_list": [ 00:07:32.138 { 00:07:32.138 "name": "pt1", 00:07:32.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.138 "is_configured": true, 00:07:32.138 "data_offset": 2048, 00:07:32.138 "data_size": 63488 00:07:32.138 }, 00:07:32.138 { 00:07:32.138 "name": "pt2", 00:07:32.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.138 "is_configured": true, 00:07:32.138 "data_offset": 2048, 00:07:32.138 "data_size": 63488 00:07:32.138 } 00:07:32.138 ] 00:07:32.138 }' 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.138 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.398 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.398 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:32.398 
21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.398 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.398 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.398 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.398 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.398 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.398 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.398 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.398 [2024-11-26 21:14:50.533479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.398 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.710 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:32.710 "name": "raid_bdev1", 00:07:32.710 "aliases": [ 00:07:32.710 "791538d7-a318-4fed-a28f-818329578c85" 00:07:32.710 ], 00:07:32.710 "product_name": "Raid Volume", 00:07:32.710 "block_size": 512, 00:07:32.710 "num_blocks": 126976, 00:07:32.710 "uuid": "791538d7-a318-4fed-a28f-818329578c85", 00:07:32.710 "assigned_rate_limits": { 00:07:32.710 "rw_ios_per_sec": 0, 00:07:32.710 "rw_mbytes_per_sec": 0, 00:07:32.710 "r_mbytes_per_sec": 0, 00:07:32.710 "w_mbytes_per_sec": 0 00:07:32.710 }, 00:07:32.710 "claimed": false, 00:07:32.710 "zoned": false, 00:07:32.710 "supported_io_types": { 00:07:32.710 "read": true, 00:07:32.710 "write": true, 00:07:32.710 "unmap": true, 00:07:32.710 "flush": true, 00:07:32.710 "reset": true, 00:07:32.710 "nvme_admin": false, 00:07:32.710 "nvme_io": false, 00:07:32.710 "nvme_io_md": false, 00:07:32.710 
"write_zeroes": true, 00:07:32.710 "zcopy": false, 00:07:32.710 "get_zone_info": false, 00:07:32.710 "zone_management": false, 00:07:32.710 "zone_append": false, 00:07:32.710 "compare": false, 00:07:32.710 "compare_and_write": false, 00:07:32.710 "abort": false, 00:07:32.710 "seek_hole": false, 00:07:32.710 "seek_data": false, 00:07:32.710 "copy": false, 00:07:32.710 "nvme_iov_md": false 00:07:32.710 }, 00:07:32.710 "memory_domains": [ 00:07:32.710 { 00:07:32.710 "dma_device_id": "system", 00:07:32.710 "dma_device_type": 1 00:07:32.710 }, 00:07:32.710 { 00:07:32.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.710 "dma_device_type": 2 00:07:32.710 }, 00:07:32.710 { 00:07:32.710 "dma_device_id": "system", 00:07:32.710 "dma_device_type": 1 00:07:32.710 }, 00:07:32.710 { 00:07:32.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.710 "dma_device_type": 2 00:07:32.710 } 00:07:32.710 ], 00:07:32.710 "driver_specific": { 00:07:32.710 "raid": { 00:07:32.710 "uuid": "791538d7-a318-4fed-a28f-818329578c85", 00:07:32.710 "strip_size_kb": 64, 00:07:32.710 "state": "online", 00:07:32.710 "raid_level": "concat", 00:07:32.710 "superblock": true, 00:07:32.710 "num_base_bdevs": 2, 00:07:32.710 "num_base_bdevs_discovered": 2, 00:07:32.710 "num_base_bdevs_operational": 2, 00:07:32.710 "base_bdevs_list": [ 00:07:32.710 { 00:07:32.710 "name": "pt1", 00:07:32.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.710 "is_configured": true, 00:07:32.710 "data_offset": 2048, 00:07:32.710 "data_size": 63488 00:07:32.710 }, 00:07:32.710 { 00:07:32.710 "name": "pt2", 00:07:32.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.710 "is_configured": true, 00:07:32.710 "data_offset": 2048, 00:07:32.710 "data_size": 63488 00:07:32.710 } 00:07:32.710 ] 00:07:32.710 } 00:07:32.710 } 00:07:32.710 }' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:32.711 pt2' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.711 21:14:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.711 [2024-11-26 21:14:50.765089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 791538d7-a318-4fed-a28f-818329578c85 '!=' 791538d7-a318-4fed-a28f-818329578c85 ']' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62064 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62064 ']' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62064 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.711 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62064 00:07:32.988 21:14:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.988 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.988 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62064' 00:07:32.988 killing process with pid 62064 00:07:32.988 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62064 00:07:32.988 [2024-11-26 21:14:50.853263] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.988 [2024-11-26 21:14:50.853406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.988 21:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62064 00:07:32.988 [2024-11-26 21:14:50.853493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.988 [2024-11-26 21:14:50.853512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:32.988 [2024-11-26 21:14:51.056416] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.369 21:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:34.369 00:07:34.369 real 0m4.516s 00:07:34.369 user 0m6.387s 00:07:34.369 sys 0m0.744s 00:07:34.369 21:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.369 ************************************ 00:07:34.369 END TEST raid_superblock_test 00:07:34.369 ************************************ 00:07:34.369 21:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.370 21:14:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:34.370 21:14:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.370 21:14:52 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.370 21:14:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.370 ************************************ 00:07:34.370 START TEST raid_read_error_test 00:07:34.370 ************************************ 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.370 21:14:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0nvyUh2ILb 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62276 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62276 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62276 ']' 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.370 21:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.370 [2024-11-26 21:14:52.308006] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:34.370 [2024-11-26 21:14:52.308207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62276 ] 00:07:34.370 [2024-11-26 21:14:52.483118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.630 [2024-11-26 21:14:52.598012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.891 [2024-11-26 21:14:52.791773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.891 [2024-11-26 21:14:52.791858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.151 BaseBdev1_malloc 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.151 true 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.151 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.152 [2024-11-26 21:14:53.253188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.152 [2024-11-26 21:14:53.253242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.152 [2024-11-26 21:14:53.253263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.152 [2024-11-26 21:14:53.253274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.152 [2024-11-26 21:14:53.255351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.152 [2024-11-26 21:14:53.255391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.152 BaseBdev1 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:35.152 BaseBdev2_malloc 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.152 true 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.152 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.412 [2024-11-26 21:14:53.306985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.412 [2024-11-26 21:14:53.307041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.412 [2024-11-26 21:14:53.307057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:35.412 [2024-11-26 21:14:53.307067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.412 [2024-11-26 21:14:53.309142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.412 [2024-11-26 21:14:53.309180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:35.412 BaseBdev2 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:35.412 
21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.412 [2024-11-26 21:14:53.315028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.412 [2024-11-26 21:14:53.316775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.412 [2024-11-26 21:14:53.317048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:35.412 [2024-11-26 21:14:53.317070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.412 [2024-11-26 21:14:53.317296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:35.412 [2024-11-26 21:14:53.317456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:35.412 [2024-11-26 21:14:53.317469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:35.412 [2024-11-26 21:14:53.317612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.412 "name": "raid_bdev1", 00:07:35.412 "uuid": "0bdfa768-8a1a-4b5e-b044-06b454cd91bc", 00:07:35.412 "strip_size_kb": 64, 00:07:35.412 "state": "online", 00:07:35.412 "raid_level": "concat", 00:07:35.412 "superblock": true, 00:07:35.412 "num_base_bdevs": 2, 00:07:35.412 "num_base_bdevs_discovered": 2, 00:07:35.412 "num_base_bdevs_operational": 2, 00:07:35.412 "base_bdevs_list": [ 00:07:35.412 { 00:07:35.412 "name": "BaseBdev1", 00:07:35.412 "uuid": "1a04a91f-35f1-5716-9c11-d1cbd2a0a2ec", 00:07:35.412 "is_configured": true, 00:07:35.412 "data_offset": 2048, 00:07:35.412 "data_size": 63488 00:07:35.412 }, 00:07:35.412 { 00:07:35.412 "name": "BaseBdev2", 00:07:35.412 "uuid": "e5eaf10d-0c29-5bc0-a524-835b121f9b4a", 00:07:35.412 "is_configured": true, 00:07:35.412 "data_offset": 2048, 00:07:35.412 "data_size": 63488 00:07:35.412 } 00:07:35.412 ] 00:07:35.412 }' 00:07:35.412 21:14:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.412 21:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.672 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:35.672 21:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:35.932 [2024-11-26 21:14:53.851412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.873 "name": "raid_bdev1", 00:07:36.873 "uuid": "0bdfa768-8a1a-4b5e-b044-06b454cd91bc", 00:07:36.873 "strip_size_kb": 64, 00:07:36.873 "state": "online", 00:07:36.873 "raid_level": "concat", 00:07:36.873 "superblock": true, 00:07:36.873 "num_base_bdevs": 2, 00:07:36.873 "num_base_bdevs_discovered": 2, 00:07:36.873 "num_base_bdevs_operational": 2, 00:07:36.873 "base_bdevs_list": [ 00:07:36.873 { 00:07:36.873 "name": "BaseBdev1", 00:07:36.873 "uuid": "1a04a91f-35f1-5716-9c11-d1cbd2a0a2ec", 00:07:36.873 "is_configured": true, 00:07:36.873 "data_offset": 2048, 00:07:36.873 "data_size": 63488 00:07:36.873 }, 00:07:36.873 { 00:07:36.873 "name": "BaseBdev2", 00:07:36.873 "uuid": "e5eaf10d-0c29-5bc0-a524-835b121f9b4a", 00:07:36.873 "is_configured": true, 00:07:36.873 "data_offset": 2048, 00:07:36.873 "data_size": 63488 00:07:36.873 } 00:07:36.873 ] 00:07:36.873 }' 00:07:36.873 21:14:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.873 21:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.134 21:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.134 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.134 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.134 [2024-11-26 21:14:55.256077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.134 [2024-11-26 21:14:55.256203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.134 [2024-11-26 21:14:55.259351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.134 { 00:07:37.134 "results": [ 00:07:37.134 { 00:07:37.134 "job": "raid_bdev1", 00:07:37.134 "core_mask": "0x1", 00:07:37.134 "workload": "randrw", 00:07:37.134 "percentage": 50, 00:07:37.134 "status": "finished", 00:07:37.134 "queue_depth": 1, 00:07:37.134 "io_size": 131072, 00:07:37.134 "runtime": 1.405811, 00:07:37.134 "iops": 15781.637787725376, 00:07:37.134 "mibps": 1972.704723465672, 00:07:37.134 "io_failed": 1, 00:07:37.134 "io_timeout": 0, 00:07:37.134 "avg_latency_us": 87.39080326159757, 00:07:37.134 "min_latency_us": 25.823580786026202, 00:07:37.134 "max_latency_us": 1609.7816593886462 00:07:37.134 } 00:07:37.134 ], 00:07:37.134 "core_count": 1 00:07:37.134 } 00:07:37.134 [2024-11-26 21:14:55.259451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.134 [2024-11-26 21:14:55.259500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.134 [2024-11-26 21:14:55.259520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:37.134 21:14:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.134 21:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62276 00:07:37.134 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62276 ']' 00:07:37.134 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62276 00:07:37.134 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:37.134 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.134 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62276 00:07:37.395 killing process with pid 62276 00:07:37.395 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.395 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.395 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62276' 00:07:37.395 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62276 00:07:37.395 [2024-11-26 21:14:55.309472] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.395 21:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62276 00:07:37.395 [2024-11-26 21:14:55.440855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.776 21:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0nvyUh2ILb 00:07:38.776 21:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:38.776 21:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:38.776 21:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:38.776 21:14:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:38.776 21:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.776 21:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.776 21:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:38.776 00:07:38.776 real 0m4.402s 00:07:38.776 user 0m5.353s 00:07:38.776 sys 0m0.556s 00:07:38.776 21:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.776 ************************************ 00:07:38.776 END TEST raid_read_error_test 00:07:38.776 ************************************ 00:07:38.776 21:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.776 21:14:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:38.776 21:14:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.776 21:14:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.776 21:14:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.776 ************************************ 00:07:38.776 START TEST raid_write_error_test 00:07:38.776 ************************************ 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.776 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.777 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9jNXqwrovi 00:07:38.777 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62416 
00:07:38.777 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.777 21:14:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62416 00:07:38.777 21:14:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62416 ']' 00:07:38.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.777 21:14:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.777 21:14:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.777 21:14:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.777 21:14:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.777 21:14:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.777 [2024-11-26 21:14:56.803796] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:38.777 [2024-11-26 21:14:56.803950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62416 ] 00:07:39.036 [2024-11-26 21:14:56.987259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.036 [2024-11-26 21:14:57.099437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.295 [2024-11-26 21:14:57.296488] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.295 [2024-11-26 21:14:57.296548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.556 BaseBdev1_malloc 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.556 true 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.556 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.817 [2024-11-26 21:14:57.712295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.817 [2024-11-26 21:14:57.712404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.817 [2024-11-26 21:14:57.712431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.817 [2024-11-26 21:14:57.712444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.817 [2024-11-26 21:14:57.714698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.817 [2024-11-26 21:14:57.714738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.817 BaseBdev1 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.817 BaseBdev2_malloc 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.817 21:14:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.817 true 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.817 [2024-11-26 21:14:57.778736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.817 [2024-11-26 21:14:57.778791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.817 [2024-11-26 21:14:57.778825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.817 [2024-11-26 21:14:57.778836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.817 [2024-11-26 21:14:57.781301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.817 [2024-11-26 21:14:57.781385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.817 BaseBdev2 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.817 [2024-11-26 21:14:57.790776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:39.817 [2024-11-26 21:14:57.792783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.817 [2024-11-26 21:14:57.792984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.817 [2024-11-26 21:14:57.793001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.817 [2024-11-26 21:14:57.793266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:39.817 [2024-11-26 21:14:57.793458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.817 [2024-11-26 21:14:57.793479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:39.817 [2024-11-26 21:14:57.793630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.817 21:14:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.817 "name": "raid_bdev1", 00:07:39.817 "uuid": "68840144-f096-4b1a-b8c0-637ad75d26b0", 00:07:39.817 "strip_size_kb": 64, 00:07:39.817 "state": "online", 00:07:39.817 "raid_level": "concat", 00:07:39.817 "superblock": true, 00:07:39.817 "num_base_bdevs": 2, 00:07:39.817 "num_base_bdevs_discovered": 2, 00:07:39.817 "num_base_bdevs_operational": 2, 00:07:39.817 "base_bdevs_list": [ 00:07:39.817 { 00:07:39.817 "name": "BaseBdev1", 00:07:39.817 "uuid": "1cdc6cce-5243-5239-8e8e-e294d1166c53", 00:07:39.817 "is_configured": true, 00:07:39.817 "data_offset": 2048, 00:07:39.817 "data_size": 63488 00:07:39.817 }, 00:07:39.817 { 00:07:39.817 "name": "BaseBdev2", 00:07:39.817 "uuid": "a873e230-cabe-5c6c-81fb-7659560b84c5", 00:07:39.817 "is_configured": true, 00:07:39.817 "data_offset": 2048, 00:07:39.817 "data_size": 63488 00:07:39.817 } 00:07:39.817 ] 00:07:39.817 }' 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.817 21:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.388 21:14:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:40.388 21:14:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:40.388 [2024-11-26 21:14:58.366997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.332 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.333 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.333 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.333 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.333 "name": "raid_bdev1", 00:07:41.333 "uuid": "68840144-f096-4b1a-b8c0-637ad75d26b0", 00:07:41.333 "strip_size_kb": 64, 00:07:41.333 "state": "online", 00:07:41.333 "raid_level": "concat", 00:07:41.333 "superblock": true, 00:07:41.333 "num_base_bdevs": 2, 00:07:41.333 "num_base_bdevs_discovered": 2, 00:07:41.333 "num_base_bdevs_operational": 2, 00:07:41.333 "base_bdevs_list": [ 00:07:41.333 { 00:07:41.333 "name": "BaseBdev1", 00:07:41.333 "uuid": "1cdc6cce-5243-5239-8e8e-e294d1166c53", 00:07:41.333 "is_configured": true, 00:07:41.333 "data_offset": 2048, 00:07:41.333 "data_size": 63488 00:07:41.333 }, 00:07:41.333 { 00:07:41.333 "name": "BaseBdev2", 00:07:41.333 "uuid": "a873e230-cabe-5c6c-81fb-7659560b84c5", 00:07:41.333 "is_configured": true, 00:07:41.333 "data_offset": 2048, 00:07:41.333 "data_size": 63488 00:07:41.333 } 00:07:41.333 ] 00:07:41.333 }' 00:07:41.333 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.333 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.598 21:14:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.598 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.598 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.598 [2024-11-26 21:14:59.705379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.598 [2024-11-26 21:14:59.705416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.598 [2024-11-26 21:14:59.707974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.598 [2024-11-26 21:14:59.708012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.598 [2024-11-26 21:14:59.708041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.598 [2024-11-26 21:14:59.708055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:41.598 { 00:07:41.598 "results": [ 00:07:41.598 { 00:07:41.598 "job": "raid_bdev1", 00:07:41.598 "core_mask": "0x1", 00:07:41.598 "workload": "randrw", 00:07:41.598 "percentage": 50, 00:07:41.598 "status": "finished", 00:07:41.598 "queue_depth": 1, 00:07:41.598 "io_size": 131072, 00:07:41.598 "runtime": 1.338978, 00:07:41.598 "iops": 16254.187895544213, 00:07:41.598 "mibps": 2031.7734869430267, 00:07:41.598 "io_failed": 1, 00:07:41.598 "io_timeout": 0, 00:07:41.598 "avg_latency_us": 84.85947098673104, 00:07:41.598 "min_latency_us": 25.041048034934498, 00:07:41.599 "max_latency_us": 1373.6803493449781 00:07:41.599 } 00:07:41.599 ], 00:07:41.599 "core_count": 1 00:07:41.599 } 00:07:41.599 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.599 21:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62416 00:07:41.599 21:14:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62416 ']' 00:07:41.599 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62416 00:07:41.599 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:41.599 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.599 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62416 00:07:41.859 killing process with pid 62416 00:07:41.859 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.859 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.859 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62416' 00:07:41.859 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62416 00:07:41.859 [2024-11-26 21:14:59.753868] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.859 21:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62416 00:07:41.859 [2024-11-26 21:14:59.884576] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.240 21:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:43.240 21:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9jNXqwrovi 00:07:43.240 21:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:43.240 21:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:43.240 21:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:43.240 21:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.240 21:15:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.240 21:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:43.240 00:07:43.240 real 0m4.420s 00:07:43.240 user 0m5.303s 00:07:43.240 sys 0m0.583s 00:07:43.240 21:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.240 ************************************ 00:07:43.240 END TEST raid_write_error_test 00:07:43.240 ************************************ 00:07:43.240 21:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.240 21:15:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:43.240 21:15:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:43.240 21:15:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:43.240 21:15:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.240 21:15:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.240 ************************************ 00:07:43.240 START TEST raid_state_function_test 00:07:43.240 ************************************ 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62554 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:43.240 Process raid pid: 62554 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62554' 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62554 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62554 ']' 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.240 21:15:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.240 [2024-11-26 21:15:01.262634] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:43.240 [2024-11-26 21:15:01.262858] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.500 [2024-11-26 21:15:01.435678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.500 [2024-11-26 21:15:01.549226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.758 [2024-11-26 21:15:01.747211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.758 [2024-11-26 21:15:01.747362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.018 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.018 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:44.018 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.018 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.018 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.018 [2024-11-26 21:15:02.095360] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.018 [2024-11-26 21:15:02.095481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.018 [2024-11-26 21:15:02.095496] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.018 [2024-11-26 21:15:02.095506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.018 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.019 21:15:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.019 "name": "Existed_Raid", 00:07:44.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.019 "strip_size_kb": 0, 00:07:44.019 "state": "configuring", 00:07:44.019 
"raid_level": "raid1", 00:07:44.019 "superblock": false, 00:07:44.019 "num_base_bdevs": 2, 00:07:44.019 "num_base_bdevs_discovered": 0, 00:07:44.019 "num_base_bdevs_operational": 2, 00:07:44.019 "base_bdevs_list": [ 00:07:44.019 { 00:07:44.019 "name": "BaseBdev1", 00:07:44.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.019 "is_configured": false, 00:07:44.019 "data_offset": 0, 00:07:44.019 "data_size": 0 00:07:44.019 }, 00:07:44.019 { 00:07:44.019 "name": "BaseBdev2", 00:07:44.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.019 "is_configured": false, 00:07:44.019 "data_offset": 0, 00:07:44.019 "data_size": 0 00:07:44.019 } 00:07:44.019 ] 00:07:44.019 }' 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.019 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.589 [2024-11-26 21:15:02.518620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.589 [2024-11-26 21:15:02.518709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:44.589 [2024-11-26 21:15:02.530588] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.589 [2024-11-26 21:15:02.530678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.589 [2024-11-26 21:15:02.530713] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.589 [2024-11-26 21:15:02.530743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.589 [2024-11-26 21:15:02.578885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.589 BaseBdev1 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.589 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.590 [ 00:07:44.590 { 00:07:44.590 "name": "BaseBdev1", 00:07:44.590 "aliases": [ 00:07:44.590 "3dd298b0-fb73-4e69-9fe8-953148af4f00" 00:07:44.590 ], 00:07:44.590 "product_name": "Malloc disk", 00:07:44.590 "block_size": 512, 00:07:44.590 "num_blocks": 65536, 00:07:44.590 "uuid": "3dd298b0-fb73-4e69-9fe8-953148af4f00", 00:07:44.590 "assigned_rate_limits": { 00:07:44.590 "rw_ios_per_sec": 0, 00:07:44.590 "rw_mbytes_per_sec": 0, 00:07:44.590 "r_mbytes_per_sec": 0, 00:07:44.590 "w_mbytes_per_sec": 0 00:07:44.590 }, 00:07:44.590 "claimed": true, 00:07:44.590 "claim_type": "exclusive_write", 00:07:44.590 "zoned": false, 00:07:44.590 "supported_io_types": { 00:07:44.590 "read": true, 00:07:44.590 "write": true, 00:07:44.590 "unmap": true, 00:07:44.590 "flush": true, 00:07:44.590 "reset": true, 00:07:44.590 "nvme_admin": false, 00:07:44.590 "nvme_io": false, 00:07:44.590 "nvme_io_md": false, 00:07:44.590 "write_zeroes": true, 00:07:44.590 "zcopy": true, 00:07:44.590 "get_zone_info": false, 00:07:44.590 "zone_management": false, 00:07:44.590 "zone_append": false, 00:07:44.590 "compare": false, 00:07:44.590 "compare_and_write": false, 00:07:44.590 "abort": true, 00:07:44.590 "seek_hole": false, 00:07:44.590 "seek_data": false, 00:07:44.590 "copy": true, 00:07:44.590 "nvme_iov_md": 
false 00:07:44.590 }, 00:07:44.590 "memory_domains": [ 00:07:44.590 { 00:07:44.590 "dma_device_id": "system", 00:07:44.590 "dma_device_type": 1 00:07:44.590 }, 00:07:44.590 { 00:07:44.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.590 "dma_device_type": 2 00:07:44.590 } 00:07:44.590 ], 00:07:44.590 "driver_specific": {} 00:07:44.590 } 00:07:44.590 ] 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.590 21:15:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.590 "name": "Existed_Raid", 00:07:44.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.590 "strip_size_kb": 0, 00:07:44.590 "state": "configuring", 00:07:44.590 "raid_level": "raid1", 00:07:44.590 "superblock": false, 00:07:44.590 "num_base_bdevs": 2, 00:07:44.590 "num_base_bdevs_discovered": 1, 00:07:44.590 "num_base_bdevs_operational": 2, 00:07:44.590 "base_bdevs_list": [ 00:07:44.590 { 00:07:44.590 "name": "BaseBdev1", 00:07:44.590 "uuid": "3dd298b0-fb73-4e69-9fe8-953148af4f00", 00:07:44.590 "is_configured": true, 00:07:44.590 "data_offset": 0, 00:07:44.590 "data_size": 65536 00:07:44.590 }, 00:07:44.590 { 00:07:44.590 "name": "BaseBdev2", 00:07:44.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.590 "is_configured": false, 00:07:44.590 "data_offset": 0, 00:07:44.590 "data_size": 0 00:07:44.590 } 00:07:44.590 ] 00:07:44.590 }' 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.590 21:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.161 [2024-11-26 21:15:03.070094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.161 [2024-11-26 21:15:03.070148] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.161 [2024-11-26 21:15:03.082105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.161 [2024-11-26 21:15:03.083857] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.161 [2024-11-26 21:15:03.083906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.161 "name": "Existed_Raid", 00:07:45.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.161 "strip_size_kb": 0, 00:07:45.161 "state": "configuring", 00:07:45.161 "raid_level": "raid1", 00:07:45.161 "superblock": false, 00:07:45.161 "num_base_bdevs": 2, 00:07:45.161 "num_base_bdevs_discovered": 1, 00:07:45.161 "num_base_bdevs_operational": 2, 00:07:45.161 "base_bdevs_list": [ 00:07:45.161 { 00:07:45.161 "name": "BaseBdev1", 00:07:45.161 "uuid": "3dd298b0-fb73-4e69-9fe8-953148af4f00", 00:07:45.161 "is_configured": true, 00:07:45.161 "data_offset": 0, 00:07:45.161 "data_size": 65536 00:07:45.161 }, 00:07:45.161 { 00:07:45.161 "name": "BaseBdev2", 00:07:45.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.161 "is_configured": false, 00:07:45.161 "data_offset": 0, 00:07:45.161 "data_size": 0 00:07:45.161 } 00:07:45.161 
] 00:07:45.161 }' 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.161 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.421 [2024-11-26 21:15:03.516539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:45.421 [2024-11-26 21:15:03.516666] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:45.421 [2024-11-26 21:15:03.516681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:45.421 [2024-11-26 21:15:03.516994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:45.421 [2024-11-26 21:15:03.517195] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:45.421 [2024-11-26 21:15:03.517211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:45.421 [2024-11-26 21:15:03.517490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.421 BaseBdev2 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.421 21:15:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.421 [ 00:07:45.421 { 00:07:45.421 "name": "BaseBdev2", 00:07:45.421 "aliases": [ 00:07:45.421 "0678f005-b93b-4dcb-aae9-8697f0d94163" 00:07:45.421 ], 00:07:45.421 "product_name": "Malloc disk", 00:07:45.421 "block_size": 512, 00:07:45.421 "num_blocks": 65536, 00:07:45.421 "uuid": "0678f005-b93b-4dcb-aae9-8697f0d94163", 00:07:45.421 "assigned_rate_limits": { 00:07:45.421 "rw_ios_per_sec": 0, 00:07:45.421 "rw_mbytes_per_sec": 0, 00:07:45.421 "r_mbytes_per_sec": 0, 00:07:45.421 "w_mbytes_per_sec": 0 00:07:45.421 }, 00:07:45.421 "claimed": true, 00:07:45.421 "claim_type": "exclusive_write", 00:07:45.421 "zoned": false, 00:07:45.421 "supported_io_types": { 00:07:45.421 "read": true, 00:07:45.421 "write": true, 00:07:45.421 "unmap": true, 00:07:45.421 "flush": true, 00:07:45.421 "reset": true, 00:07:45.421 "nvme_admin": false, 00:07:45.421 "nvme_io": false, 00:07:45.421 "nvme_io_md": 
false, 00:07:45.421 "write_zeroes": true, 00:07:45.421 "zcopy": true, 00:07:45.421 "get_zone_info": false, 00:07:45.421 "zone_management": false, 00:07:45.421 "zone_append": false, 00:07:45.421 "compare": false, 00:07:45.421 "compare_and_write": false, 00:07:45.421 "abort": true, 00:07:45.421 "seek_hole": false, 00:07:45.421 "seek_data": false, 00:07:45.421 "copy": true, 00:07:45.421 "nvme_iov_md": false 00:07:45.421 }, 00:07:45.421 "memory_domains": [ 00:07:45.421 { 00:07:45.421 "dma_device_id": "system", 00:07:45.421 "dma_device_type": 1 00:07:45.421 }, 00:07:45.421 { 00:07:45.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.421 "dma_device_type": 2 00:07:45.421 } 00:07:45.421 ], 00:07:45.421 "driver_specific": {} 00:07:45.421 } 00:07:45.421 ] 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.421 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.681 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.681 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.681 "name": "Existed_Raid", 00:07:45.681 "uuid": "59a9b24f-8d27-42bc-8cdf-0117e5b574bc", 00:07:45.681 "strip_size_kb": 0, 00:07:45.681 "state": "online", 00:07:45.681 "raid_level": "raid1", 00:07:45.681 "superblock": false, 00:07:45.681 "num_base_bdevs": 2, 00:07:45.681 "num_base_bdevs_discovered": 2, 00:07:45.681 "num_base_bdevs_operational": 2, 00:07:45.681 "base_bdevs_list": [ 00:07:45.681 { 00:07:45.681 "name": "BaseBdev1", 00:07:45.681 "uuid": "3dd298b0-fb73-4e69-9fe8-953148af4f00", 00:07:45.681 "is_configured": true, 00:07:45.681 "data_offset": 0, 00:07:45.681 "data_size": 65536 00:07:45.681 }, 00:07:45.681 { 00:07:45.681 "name": "BaseBdev2", 00:07:45.681 "uuid": "0678f005-b93b-4dcb-aae9-8697f0d94163", 00:07:45.681 "is_configured": true, 00:07:45.681 "data_offset": 0, 00:07:45.681 "data_size": 65536 00:07:45.681 } 00:07:45.681 ] 00:07:45.681 }' 00:07:45.681 21:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:45.681 21:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.940 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:45.940 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:45.940 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.940 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:45.940 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.940 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.940 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:45.940 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.940 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.940 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.940 [2024-11-26 21:15:04.028037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.941 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.941 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:45.941 "name": "Existed_Raid", 00:07:45.941 "aliases": [ 00:07:45.941 "59a9b24f-8d27-42bc-8cdf-0117e5b574bc" 00:07:45.941 ], 00:07:45.941 "product_name": "Raid Volume", 00:07:45.941 "block_size": 512, 00:07:45.941 "num_blocks": 65536, 00:07:45.941 "uuid": "59a9b24f-8d27-42bc-8cdf-0117e5b574bc", 00:07:45.941 "assigned_rate_limits": { 00:07:45.941 "rw_ios_per_sec": 0, 00:07:45.941 "rw_mbytes_per_sec": 0, 00:07:45.941 "r_mbytes_per_sec": 
0, 00:07:45.941 "w_mbytes_per_sec": 0 00:07:45.941 }, 00:07:45.941 "claimed": false, 00:07:45.941 "zoned": false, 00:07:45.941 "supported_io_types": { 00:07:45.941 "read": true, 00:07:45.941 "write": true, 00:07:45.941 "unmap": false, 00:07:45.941 "flush": false, 00:07:45.941 "reset": true, 00:07:45.941 "nvme_admin": false, 00:07:45.941 "nvme_io": false, 00:07:45.941 "nvme_io_md": false, 00:07:45.941 "write_zeroes": true, 00:07:45.941 "zcopy": false, 00:07:45.941 "get_zone_info": false, 00:07:45.941 "zone_management": false, 00:07:45.941 "zone_append": false, 00:07:45.941 "compare": false, 00:07:45.941 "compare_and_write": false, 00:07:45.941 "abort": false, 00:07:45.941 "seek_hole": false, 00:07:45.941 "seek_data": false, 00:07:45.941 "copy": false, 00:07:45.941 "nvme_iov_md": false 00:07:45.941 }, 00:07:45.941 "memory_domains": [ 00:07:45.941 { 00:07:45.941 "dma_device_id": "system", 00:07:45.941 "dma_device_type": 1 00:07:45.941 }, 00:07:45.941 { 00:07:45.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.941 "dma_device_type": 2 00:07:45.941 }, 00:07:45.941 { 00:07:45.941 "dma_device_id": "system", 00:07:45.941 "dma_device_type": 1 00:07:45.941 }, 00:07:45.941 { 00:07:45.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.941 "dma_device_type": 2 00:07:45.941 } 00:07:45.941 ], 00:07:45.941 "driver_specific": { 00:07:45.941 "raid": { 00:07:45.941 "uuid": "59a9b24f-8d27-42bc-8cdf-0117e5b574bc", 00:07:45.941 "strip_size_kb": 0, 00:07:45.941 "state": "online", 00:07:45.941 "raid_level": "raid1", 00:07:45.941 "superblock": false, 00:07:45.941 "num_base_bdevs": 2, 00:07:45.941 "num_base_bdevs_discovered": 2, 00:07:45.941 "num_base_bdevs_operational": 2, 00:07:45.941 "base_bdevs_list": [ 00:07:45.941 { 00:07:45.941 "name": "BaseBdev1", 00:07:45.941 "uuid": "3dd298b0-fb73-4e69-9fe8-953148af4f00", 00:07:45.941 "is_configured": true, 00:07:45.941 "data_offset": 0, 00:07:45.941 "data_size": 65536 00:07:45.941 }, 00:07:45.941 { 00:07:45.941 "name": "BaseBdev2", 
00:07:45.941 "uuid": "0678f005-b93b-4dcb-aae9-8697f0d94163", 00:07:45.941 "is_configured": true, 00:07:45.941 "data_offset": 0, 00:07:45.941 "data_size": 65536 00:07:45.941 } 00:07:45.941 ] 00:07:45.941 } 00:07:45.941 } 00:07:45.941 }' 00:07:45.941 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:46.208 BaseBdev2' 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.208 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.209 [2024-11-26 21:15:04.263438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.209 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.469 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.469 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.469 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.469 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.469 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.469 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.469 "name": "Existed_Raid", 00:07:46.469 "uuid": "59a9b24f-8d27-42bc-8cdf-0117e5b574bc", 00:07:46.469 "strip_size_kb": 0, 00:07:46.469 "state": "online", 00:07:46.469 "raid_level": "raid1", 00:07:46.469 "superblock": false, 00:07:46.469 "num_base_bdevs": 2, 00:07:46.469 "num_base_bdevs_discovered": 1, 00:07:46.469 "num_base_bdevs_operational": 1, 00:07:46.469 "base_bdevs_list": [ 00:07:46.469 
{ 00:07:46.469 "name": null, 00:07:46.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.469 "is_configured": false, 00:07:46.469 "data_offset": 0, 00:07:46.469 "data_size": 65536 00:07:46.469 }, 00:07:46.469 { 00:07:46.469 "name": "BaseBdev2", 00:07:46.469 "uuid": "0678f005-b93b-4dcb-aae9-8697f0d94163", 00:07:46.470 "is_configured": true, 00:07:46.470 "data_offset": 0, 00:07:46.470 "data_size": 65536 00:07:46.470 } 00:07:46.470 ] 00:07:46.470 }' 00:07:46.470 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.470 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.730 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:46.730 [2024-11-26 21:15:04.810195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:46.731 [2024-11-26 21:15:04.810293] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.991 [2024-11-26 21:15:04.902068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.991 [2024-11-26 21:15:04.902198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.991 [2024-11-26 21:15:04.902215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62554 00:07:46.991 21:15:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62554 ']' 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62554 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62554 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62554' 00:07:46.991 killing process with pid 62554 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62554 00:07:46.991 [2024-11-26 21:15:04.984444] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.991 21:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62554 00:07:46.991 [2024-11-26 21:15:05.002756] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.372 00:07:48.372 real 0m4.979s 00:07:48.372 user 0m7.186s 00:07:48.372 sys 0m0.761s 00:07:48.372 ************************************ 00:07:48.372 END TEST raid_state_function_test 00:07:48.372 ************************************ 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.372 21:15:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:48.372 21:15:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:48.372 21:15:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.372 21:15:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.372 ************************************ 00:07:48.372 START TEST raid_state_function_test_sb 00:07:48.372 ************************************ 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62807 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62807' 00:07:48.372 Process raid pid: 62807 00:07:48.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62807 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62807 ']' 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.372 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.372 [2024-11-26 21:15:06.311142] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:48.372 [2024-11-26 21:15:06.311338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.372 [2024-11-26 21:15:06.485018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.631 [2024-11-26 21:15:06.596582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.899 [2024-11-26 21:15:06.802949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.899 [2024-11-26 21:15:06.803053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.174 [2024-11-26 21:15:07.164079] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.174 [2024-11-26 21:15:07.164206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.174 [2024-11-26 21:15:07.164240] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.174 [2024-11-26 21:15:07.164266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.174 
21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.174 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.175 "name": "Existed_Raid", 00:07:49.175 "uuid": "cddef560-0da7-459b-8d98-f416509d8653", 00:07:49.175 "strip_size_kb": 0, 
00:07:49.175 "state": "configuring", 00:07:49.175 "raid_level": "raid1", 00:07:49.175 "superblock": true, 00:07:49.175 "num_base_bdevs": 2, 00:07:49.175 "num_base_bdevs_discovered": 0, 00:07:49.175 "num_base_bdevs_operational": 2, 00:07:49.175 "base_bdevs_list": [ 00:07:49.175 { 00:07:49.175 "name": "BaseBdev1", 00:07:49.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.175 "is_configured": false, 00:07:49.175 "data_offset": 0, 00:07:49.175 "data_size": 0 00:07:49.175 }, 00:07:49.175 { 00:07:49.175 "name": "BaseBdev2", 00:07:49.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.175 "is_configured": false, 00:07:49.175 "data_offset": 0, 00:07:49.175 "data_size": 0 00:07:49.175 } 00:07:49.175 ] 00:07:49.175 }' 00:07:49.175 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.175 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.742 [2024-11-26 21:15:07.615245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.742 [2024-11-26 21:15:07.615347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.742 21:15:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.742 [2024-11-26 21:15:07.627215] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.742 [2024-11-26 21:15:07.627301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.742 [2024-11-26 21:15:07.627341] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.742 [2024-11-26 21:15:07.627371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.742 [2024-11-26 21:15:07.676248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.742 BaseBdev1 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.742 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.743 [ 00:07:49.743 { 00:07:49.743 "name": "BaseBdev1", 00:07:49.743 "aliases": [ 00:07:49.743 "d8096d4d-280c-4880-89a9-ce190b69327c" 00:07:49.743 ], 00:07:49.743 "product_name": "Malloc disk", 00:07:49.743 "block_size": 512, 00:07:49.743 "num_blocks": 65536, 00:07:49.743 "uuid": "d8096d4d-280c-4880-89a9-ce190b69327c", 00:07:49.743 "assigned_rate_limits": { 00:07:49.743 "rw_ios_per_sec": 0, 00:07:49.743 "rw_mbytes_per_sec": 0, 00:07:49.743 "r_mbytes_per_sec": 0, 00:07:49.743 "w_mbytes_per_sec": 0 00:07:49.743 }, 00:07:49.743 "claimed": true, 00:07:49.743 "claim_type": "exclusive_write", 00:07:49.743 "zoned": false, 00:07:49.743 "supported_io_types": { 00:07:49.743 "read": true, 00:07:49.743 "write": true, 00:07:49.743 "unmap": true, 00:07:49.743 "flush": true, 00:07:49.743 "reset": true, 00:07:49.743 "nvme_admin": false, 00:07:49.743 "nvme_io": false, 00:07:49.743 "nvme_io_md": false, 00:07:49.743 "write_zeroes": true, 00:07:49.743 "zcopy": true, 00:07:49.743 "get_zone_info": false, 00:07:49.743 "zone_management": false, 00:07:49.743 "zone_append": false, 00:07:49.743 "compare": false, 00:07:49.743 "compare_and_write": false, 00:07:49.743 
"abort": true, 00:07:49.743 "seek_hole": false, 00:07:49.743 "seek_data": false, 00:07:49.743 "copy": true, 00:07:49.743 "nvme_iov_md": false 00:07:49.743 }, 00:07:49.743 "memory_domains": [ 00:07:49.743 { 00:07:49.743 "dma_device_id": "system", 00:07:49.743 "dma_device_type": 1 00:07:49.743 }, 00:07:49.743 { 00:07:49.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.743 "dma_device_type": 2 00:07:49.743 } 00:07:49.743 ], 00:07:49.743 "driver_specific": {} 00:07:49.743 } 00:07:49.743 ] 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.743 "name": "Existed_Raid", 00:07:49.743 "uuid": "c2af9dda-0046-4376-ad2f-a3505e9e7d7f", 00:07:49.743 "strip_size_kb": 0, 00:07:49.743 "state": "configuring", 00:07:49.743 "raid_level": "raid1", 00:07:49.743 "superblock": true, 00:07:49.743 "num_base_bdevs": 2, 00:07:49.743 "num_base_bdevs_discovered": 1, 00:07:49.743 "num_base_bdevs_operational": 2, 00:07:49.743 "base_bdevs_list": [ 00:07:49.743 { 00:07:49.743 "name": "BaseBdev1", 00:07:49.743 "uuid": "d8096d4d-280c-4880-89a9-ce190b69327c", 00:07:49.743 "is_configured": true, 00:07:49.743 "data_offset": 2048, 00:07:49.743 "data_size": 63488 00:07:49.743 }, 00:07:49.743 { 00:07:49.743 "name": "BaseBdev2", 00:07:49.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.743 "is_configured": false, 00:07:49.743 "data_offset": 0, 00:07:49.743 "data_size": 0 00:07:49.743 } 00:07:49.743 ] 00:07:49.743 }' 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.743 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.310 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.310 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.310 21:15:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.310 [2024-11-26 21:15:08.171526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.311 [2024-11-26 21:15:08.171668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.311 [2024-11-26 21:15:08.183537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.311 [2024-11-26 21:15:08.185392] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.311 [2024-11-26 21:15:08.185480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.311 "name": "Existed_Raid", 00:07:50.311 "uuid": "755647c7-d102-4711-a43b-bf5a2b4c1a56", 00:07:50.311 "strip_size_kb": 0, 00:07:50.311 "state": "configuring", 00:07:50.311 "raid_level": "raid1", 00:07:50.311 "superblock": true, 00:07:50.311 "num_base_bdevs": 2, 00:07:50.311 "num_base_bdevs_discovered": 1, 00:07:50.311 "num_base_bdevs_operational": 2, 00:07:50.311 "base_bdevs_list": [ 00:07:50.311 { 00:07:50.311 "name": "BaseBdev1", 00:07:50.311 "uuid": "d8096d4d-280c-4880-89a9-ce190b69327c", 00:07:50.311 "is_configured": true, 00:07:50.311 "data_offset": 2048, 
00:07:50.311 "data_size": 63488 00:07:50.311 }, 00:07:50.311 { 00:07:50.311 "name": "BaseBdev2", 00:07:50.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.311 "is_configured": false, 00:07:50.311 "data_offset": 0, 00:07:50.311 "data_size": 0 00:07:50.311 } 00:07:50.311 ] 00:07:50.311 }' 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.311 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.570 [2024-11-26 21:15:08.717349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.570 [2024-11-26 21:15:08.717697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.570 [2024-11-26 21:15:08.717755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:50.570 [2024-11-26 21:15:08.718098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:50.570 [2024-11-26 21:15:08.718327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.570 BaseBdev2 00:07:50.570 [2024-11-26 21:15:08.718382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:50.570 [2024-11-26 21:15:08.718598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.570 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.829 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.829 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:50.829 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.829 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.829 [ 00:07:50.829 { 00:07:50.829 "name": "BaseBdev2", 00:07:50.829 "aliases": [ 00:07:50.829 "0bee1cb3-cda1-47ee-ab34-a4aef2c688c1" 00:07:50.829 ], 00:07:50.829 "product_name": "Malloc disk", 00:07:50.829 "block_size": 512, 00:07:50.829 "num_blocks": 65536, 00:07:50.829 "uuid": "0bee1cb3-cda1-47ee-ab34-a4aef2c688c1", 00:07:50.829 "assigned_rate_limits": { 00:07:50.829 "rw_ios_per_sec": 0, 00:07:50.829 "rw_mbytes_per_sec": 0, 00:07:50.829 "r_mbytes_per_sec": 0, 00:07:50.829 "w_mbytes_per_sec": 0 00:07:50.829 }, 00:07:50.829 "claimed": true, 00:07:50.829 "claim_type": 
"exclusive_write", 00:07:50.829 "zoned": false, 00:07:50.829 "supported_io_types": { 00:07:50.829 "read": true, 00:07:50.829 "write": true, 00:07:50.829 "unmap": true, 00:07:50.829 "flush": true, 00:07:50.829 "reset": true, 00:07:50.829 "nvme_admin": false, 00:07:50.829 "nvme_io": false, 00:07:50.829 "nvme_io_md": false, 00:07:50.829 "write_zeroes": true, 00:07:50.829 "zcopy": true, 00:07:50.829 "get_zone_info": false, 00:07:50.829 "zone_management": false, 00:07:50.829 "zone_append": false, 00:07:50.829 "compare": false, 00:07:50.829 "compare_and_write": false, 00:07:50.829 "abort": true, 00:07:50.830 "seek_hole": false, 00:07:50.830 "seek_data": false, 00:07:50.830 "copy": true, 00:07:50.830 "nvme_iov_md": false 00:07:50.830 }, 00:07:50.830 "memory_domains": [ 00:07:50.830 { 00:07:50.830 "dma_device_id": "system", 00:07:50.830 "dma_device_type": 1 00:07:50.830 }, 00:07:50.830 { 00:07:50.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.830 "dma_device_type": 2 00:07:50.830 } 00:07:50.830 ], 00:07:50.830 "driver_specific": {} 00:07:50.830 } 00:07:50.830 ] 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.830 "name": "Existed_Raid", 00:07:50.830 "uuid": "755647c7-d102-4711-a43b-bf5a2b4c1a56", 00:07:50.830 "strip_size_kb": 0, 00:07:50.830 "state": "online", 00:07:50.830 "raid_level": "raid1", 00:07:50.830 "superblock": true, 00:07:50.830 "num_base_bdevs": 2, 00:07:50.830 "num_base_bdevs_discovered": 2, 00:07:50.830 "num_base_bdevs_operational": 2, 00:07:50.830 "base_bdevs_list": [ 00:07:50.830 { 00:07:50.830 "name": "BaseBdev1", 00:07:50.830 "uuid": "d8096d4d-280c-4880-89a9-ce190b69327c", 00:07:50.830 "is_configured": true, 00:07:50.830 "data_offset": 2048, 00:07:50.830 "data_size": 63488 
00:07:50.830 }, 00:07:50.830 { 00:07:50.830 "name": "BaseBdev2", 00:07:50.830 "uuid": "0bee1cb3-cda1-47ee-ab34-a4aef2c688c1", 00:07:50.830 "is_configured": true, 00:07:50.830 "data_offset": 2048, 00:07:50.830 "data_size": 63488 00:07:50.830 } 00:07:50.830 ] 00:07:50.830 }' 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.830 21:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.089 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:51.089 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:51.089 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.089 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.089 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.089 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.089 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:51.089 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.089 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.089 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.089 [2024-11-26 21:15:09.220873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.089 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.347 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.347 "name": 
"Existed_Raid", 00:07:51.347 "aliases": [ 00:07:51.347 "755647c7-d102-4711-a43b-bf5a2b4c1a56" 00:07:51.347 ], 00:07:51.347 "product_name": "Raid Volume", 00:07:51.347 "block_size": 512, 00:07:51.347 "num_blocks": 63488, 00:07:51.347 "uuid": "755647c7-d102-4711-a43b-bf5a2b4c1a56", 00:07:51.347 "assigned_rate_limits": { 00:07:51.347 "rw_ios_per_sec": 0, 00:07:51.347 "rw_mbytes_per_sec": 0, 00:07:51.347 "r_mbytes_per_sec": 0, 00:07:51.347 "w_mbytes_per_sec": 0 00:07:51.347 }, 00:07:51.347 "claimed": false, 00:07:51.347 "zoned": false, 00:07:51.347 "supported_io_types": { 00:07:51.347 "read": true, 00:07:51.347 "write": true, 00:07:51.347 "unmap": false, 00:07:51.347 "flush": false, 00:07:51.347 "reset": true, 00:07:51.347 "nvme_admin": false, 00:07:51.347 "nvme_io": false, 00:07:51.347 "nvme_io_md": false, 00:07:51.347 "write_zeroes": true, 00:07:51.347 "zcopy": false, 00:07:51.347 "get_zone_info": false, 00:07:51.347 "zone_management": false, 00:07:51.347 "zone_append": false, 00:07:51.347 "compare": false, 00:07:51.347 "compare_and_write": false, 00:07:51.347 "abort": false, 00:07:51.347 "seek_hole": false, 00:07:51.347 "seek_data": false, 00:07:51.347 "copy": false, 00:07:51.347 "nvme_iov_md": false 00:07:51.347 }, 00:07:51.347 "memory_domains": [ 00:07:51.347 { 00:07:51.347 "dma_device_id": "system", 00:07:51.347 "dma_device_type": 1 00:07:51.347 }, 00:07:51.347 { 00:07:51.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.347 "dma_device_type": 2 00:07:51.347 }, 00:07:51.347 { 00:07:51.347 "dma_device_id": "system", 00:07:51.347 "dma_device_type": 1 00:07:51.347 }, 00:07:51.347 { 00:07:51.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.347 "dma_device_type": 2 00:07:51.347 } 00:07:51.347 ], 00:07:51.347 "driver_specific": { 00:07:51.347 "raid": { 00:07:51.347 "uuid": "755647c7-d102-4711-a43b-bf5a2b4c1a56", 00:07:51.347 "strip_size_kb": 0, 00:07:51.348 "state": "online", 00:07:51.348 "raid_level": "raid1", 00:07:51.348 "superblock": true, 00:07:51.348 
"num_base_bdevs": 2, 00:07:51.348 "num_base_bdevs_discovered": 2, 00:07:51.348 "num_base_bdevs_operational": 2, 00:07:51.348 "base_bdevs_list": [ 00:07:51.348 { 00:07:51.348 "name": "BaseBdev1", 00:07:51.348 "uuid": "d8096d4d-280c-4880-89a9-ce190b69327c", 00:07:51.348 "is_configured": true, 00:07:51.348 "data_offset": 2048, 00:07:51.348 "data_size": 63488 00:07:51.348 }, 00:07:51.348 { 00:07:51.348 "name": "BaseBdev2", 00:07:51.348 "uuid": "0bee1cb3-cda1-47ee-ab34-a4aef2c688c1", 00:07:51.348 "is_configured": true, 00:07:51.348 "data_offset": 2048, 00:07:51.348 "data_size": 63488 00:07:51.348 } 00:07:51.348 ] 00:07:51.348 } 00:07:51.348 } 00:07:51.348 }' 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:51.348 BaseBdev2' 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.348 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.348 [2024-11-26 21:15:09.420262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:51.606 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.606 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:51.606 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:51.606 21:15:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.606 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:51.606 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:51.606 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:51.606 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.606 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.606 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.606 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.606 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:51.607 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.607 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.607 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.607 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.607 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.607 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.607 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.607 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.607 21:15:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.607 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.607 "name": "Existed_Raid", 00:07:51.607 "uuid": "755647c7-d102-4711-a43b-bf5a2b4c1a56", 00:07:51.607 "strip_size_kb": 0, 00:07:51.607 "state": "online", 00:07:51.607 "raid_level": "raid1", 00:07:51.607 "superblock": true, 00:07:51.607 "num_base_bdevs": 2, 00:07:51.607 "num_base_bdevs_discovered": 1, 00:07:51.607 "num_base_bdevs_operational": 1, 00:07:51.607 "base_bdevs_list": [ 00:07:51.607 { 00:07:51.607 "name": null, 00:07:51.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.607 "is_configured": false, 00:07:51.607 "data_offset": 0, 00:07:51.607 "data_size": 63488 00:07:51.607 }, 00:07:51.607 { 00:07:51.607 "name": "BaseBdev2", 00:07:51.607 "uuid": "0bee1cb3-cda1-47ee-ab34-a4aef2c688c1", 00:07:51.607 "is_configured": true, 00:07:51.607 "data_offset": 2048, 00:07:51.607 "data_size": 63488 00:07:51.607 } 00:07:51.607 ] 00:07:51.607 }' 00:07:51.607 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.607 21:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.866 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:51.866 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.866 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.866 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.866 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.866 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:51.866 21:15:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.125 [2024-11-26 21:15:10.064959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:52.125 [2024-11-26 21:15:10.065083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.125 [2024-11-26 21:15:10.172160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.125 [2024-11-26 21:15:10.172303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.125 [2024-11-26 21:15:10.172323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62807 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62807 ']' 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62807 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62807 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.125 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62807' 00:07:52.126 killing process with pid 62807 00:07:52.126 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62807 00:07:52.126 [2024-11-26 21:15:10.266921] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.126 21:15:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 62807 00:07:52.384 [2024-11-26 21:15:10.286360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.764 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:53.764 00:07:53.764 real 0m5.271s 00:07:53.764 user 0m7.589s 00:07:53.764 sys 0m0.847s 00:07:53.764 ************************************ 00:07:53.764 END TEST raid_state_function_test_sb 00:07:53.764 ************************************ 00:07:53.764 21:15:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.764 21:15:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.764 21:15:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:53.764 21:15:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:53.764 21:15:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.764 21:15:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.764 ************************************ 00:07:53.764 START TEST raid_superblock_test 00:07:53.764 ************************************ 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:53.764 21:15:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63059 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63059 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63059 ']' 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.764 21:15:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.764 [2024-11-26 21:15:11.661656] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:53.764 [2024-11-26 21:15:11.661860] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63059 ] 00:07:53.764 [2024-11-26 21:15:11.837793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.024 [2024-11-26 21:15:11.954005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.024 [2024-11-26 21:15:12.172281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.024 [2024-11-26 21:15:12.172373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.595 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.595 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:54.595 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:54.595 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.595 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:54.595 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:54.595 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:54.595 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.596 21:15:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.596 malloc1 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.596 [2024-11-26 21:15:12.602935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.596 [2024-11-26 21:15:12.603015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.596 [2024-11-26 21:15:12.603038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:54.596 [2024-11-26 21:15:12.603050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.596 [2024-11-26 21:15:12.605483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.596 [2024-11-26 21:15:12.605521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.596 pt1 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.596 21:15:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.596 malloc2 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.596 [2024-11-26 21:15:12.663162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:54.596 [2024-11-26 21:15:12.663220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.596 [2024-11-26 21:15:12.663245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:54.596 
[2024-11-26 21:15:12.663255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.596 [2024-11-26 21:15:12.665587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.596 [2024-11-26 21:15:12.665710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:54.596 pt2 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.596 [2024-11-26 21:15:12.675189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.596 [2024-11-26 21:15:12.677224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:54.596 [2024-11-26 21:15:12.677411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:54.596 [2024-11-26 21:15:12.677430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:54.596 [2024-11-26 21:15:12.677723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:54.596 [2024-11-26 21:15:12.677901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:54.596 [2024-11-26 21:15:12.677919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:54.596 [2024-11-26 21:15:12.678118] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.596 "name": "raid_bdev1", 00:07:54.596 "uuid": 
"3eef1184-a26e-4636-850d-ea2d31e74c66", 00:07:54.596 "strip_size_kb": 0, 00:07:54.596 "state": "online", 00:07:54.596 "raid_level": "raid1", 00:07:54.596 "superblock": true, 00:07:54.596 "num_base_bdevs": 2, 00:07:54.596 "num_base_bdevs_discovered": 2, 00:07:54.596 "num_base_bdevs_operational": 2, 00:07:54.596 "base_bdevs_list": [ 00:07:54.596 { 00:07:54.596 "name": "pt1", 00:07:54.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.596 "is_configured": true, 00:07:54.596 "data_offset": 2048, 00:07:54.596 "data_size": 63488 00:07:54.596 }, 00:07:54.596 { 00:07:54.596 "name": "pt2", 00:07:54.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.596 "is_configured": true, 00:07:54.596 "data_offset": 2048, 00:07:54.596 "data_size": 63488 00:07:54.596 } 00:07:54.596 ] 00:07:54.596 }' 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.596 21:15:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.166 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:55.166 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:55.166 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.166 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.166 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.166 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.167 
21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.167 [2024-11-26 21:15:13.122801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.167 "name": "raid_bdev1", 00:07:55.167 "aliases": [ 00:07:55.167 "3eef1184-a26e-4636-850d-ea2d31e74c66" 00:07:55.167 ], 00:07:55.167 "product_name": "Raid Volume", 00:07:55.167 "block_size": 512, 00:07:55.167 "num_blocks": 63488, 00:07:55.167 "uuid": "3eef1184-a26e-4636-850d-ea2d31e74c66", 00:07:55.167 "assigned_rate_limits": { 00:07:55.167 "rw_ios_per_sec": 0, 00:07:55.167 "rw_mbytes_per_sec": 0, 00:07:55.167 "r_mbytes_per_sec": 0, 00:07:55.167 "w_mbytes_per_sec": 0 00:07:55.167 }, 00:07:55.167 "claimed": false, 00:07:55.167 "zoned": false, 00:07:55.167 "supported_io_types": { 00:07:55.167 "read": true, 00:07:55.167 "write": true, 00:07:55.167 "unmap": false, 00:07:55.167 "flush": false, 00:07:55.167 "reset": true, 00:07:55.167 "nvme_admin": false, 00:07:55.167 "nvme_io": false, 00:07:55.167 "nvme_io_md": false, 00:07:55.167 "write_zeroes": true, 00:07:55.167 "zcopy": false, 00:07:55.167 "get_zone_info": false, 00:07:55.167 "zone_management": false, 00:07:55.167 "zone_append": false, 00:07:55.167 "compare": false, 00:07:55.167 "compare_and_write": false, 00:07:55.167 "abort": false, 00:07:55.167 "seek_hole": false, 00:07:55.167 "seek_data": false, 00:07:55.167 "copy": false, 00:07:55.167 "nvme_iov_md": false 00:07:55.167 }, 00:07:55.167 "memory_domains": [ 00:07:55.167 { 00:07:55.167 "dma_device_id": "system", 00:07:55.167 "dma_device_type": 1 00:07:55.167 }, 00:07:55.167 { 00:07:55.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.167 "dma_device_type": 2 00:07:55.167 }, 00:07:55.167 { 00:07:55.167 "dma_device_id": "system", 00:07:55.167 
"dma_device_type": 1 00:07:55.167 }, 00:07:55.167 { 00:07:55.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.167 "dma_device_type": 2 00:07:55.167 } 00:07:55.167 ], 00:07:55.167 "driver_specific": { 00:07:55.167 "raid": { 00:07:55.167 "uuid": "3eef1184-a26e-4636-850d-ea2d31e74c66", 00:07:55.167 "strip_size_kb": 0, 00:07:55.167 "state": "online", 00:07:55.167 "raid_level": "raid1", 00:07:55.167 "superblock": true, 00:07:55.167 "num_base_bdevs": 2, 00:07:55.167 "num_base_bdevs_discovered": 2, 00:07:55.167 "num_base_bdevs_operational": 2, 00:07:55.167 "base_bdevs_list": [ 00:07:55.167 { 00:07:55.167 "name": "pt1", 00:07:55.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.167 "is_configured": true, 00:07:55.167 "data_offset": 2048, 00:07:55.167 "data_size": 63488 00:07:55.167 }, 00:07:55.167 { 00:07:55.167 "name": "pt2", 00:07:55.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.167 "is_configured": true, 00:07:55.167 "data_offset": 2048, 00:07:55.167 "data_size": 63488 00:07:55.167 } 00:07:55.167 ] 00:07:55.167 } 00:07:55.167 } 00:07:55.167 }' 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:55.167 pt2' 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.167 21:15:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.167 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:55.428 [2024-11-26 21:15:13.382387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3eef1184-a26e-4636-850d-ea2d31e74c66 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3eef1184-a26e-4636-850d-ea2d31e74c66 ']' 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.428 [2024-11-26 21:15:13.433939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.428 [2024-11-26 21:15:13.433980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.428 [2024-11-26 21:15:13.434078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.428 [2024-11-26 21:15:13.434145] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.428 [2024-11-26 21:15:13.434160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.428 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.429 [2024-11-26 21:15:13.569743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:55.429 [2024-11-26 21:15:13.571905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:55.429 [2024-11-26 21:15:13.572072] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:55.429 [2024-11-26 21:15:13.572147] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:55.429 [2024-11-26 21:15:13.572169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.429 [2024-11-26 21:15:13.572183] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:55.429 request: 00:07:55.429 { 00:07:55.429 "name": "raid_bdev1", 00:07:55.429 "raid_level": "raid1", 00:07:55.429 "base_bdevs": [ 00:07:55.429 "malloc1", 00:07:55.429 "malloc2" 00:07:55.429 ], 00:07:55.429 "superblock": false, 00:07:55.429 "method": "bdev_raid_create", 00:07:55.429 "req_id": 1 00:07:55.429 } 00:07:55.429 Got JSON-RPC error response 00:07:55.429 response: 00:07:55.429 { 00:07:55.429 "code": -17, 00:07:55.429 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:55.429 } 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.429 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:55.698 
21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.698 [2024-11-26 21:15:13.637631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:55.698 [2024-11-26 21:15:13.637774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.698 [2024-11-26 21:15:13.637814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:55.698 [2024-11-26 21:15:13.637847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.698 [2024-11-26 21:15:13.640348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.698 [2024-11-26 21:15:13.640433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:55.698 [2024-11-26 21:15:13.640558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:55.698 [2024-11-26 21:15:13.640650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:55.698 pt1 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.698 21:15:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.698 "name": "raid_bdev1", 00:07:55.698 "uuid": "3eef1184-a26e-4636-850d-ea2d31e74c66", 00:07:55.698 "strip_size_kb": 0, 00:07:55.698 "state": "configuring", 00:07:55.698 "raid_level": "raid1", 00:07:55.698 "superblock": true, 00:07:55.698 "num_base_bdevs": 2, 00:07:55.698 "num_base_bdevs_discovered": 1, 00:07:55.698 "num_base_bdevs_operational": 2, 00:07:55.698 "base_bdevs_list": [ 00:07:55.698 { 00:07:55.698 "name": "pt1", 00:07:55.698 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.698 "is_configured": true, 00:07:55.698 "data_offset": 2048, 00:07:55.698 "data_size": 63488 00:07:55.698 }, 00:07:55.698 { 00:07:55.698 "name": null, 00:07:55.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.698 "is_configured": false, 00:07:55.698 "data_offset": 2048, 00:07:55.698 "data_size": 63488 00:07:55.698 } 00:07:55.698 ] 00:07:55.698 }' 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:55.698 21:15:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.975 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:55.975 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:55.975 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.975 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.975 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.975 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.975 [2024-11-26 21:15:14.112925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.976 [2024-11-26 21:15:14.113077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.976 [2024-11-26 21:15:14.113145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:55.976 [2024-11-26 21:15:14.113184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.976 [2024-11-26 21:15:14.113735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.976 [2024-11-26 21:15:14.113806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:55.976 [2024-11-26 21:15:14.113937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:55.976 [2024-11-26 21:15:14.114013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.976 [2024-11-26 21:15:14.114199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:55.976 [2024-11-26 21:15:14.114246] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:07:55.976 [2024-11-26 21:15:14.114561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:55.976 [2024-11-26 21:15:14.114772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:55.976 [2024-11-26 21:15:14.114817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:55.976 [2024-11-26 21:15:14.115038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.976 pt2 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.976 21:15:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.976 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.234 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.234 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.234 "name": "raid_bdev1", 00:07:56.234 "uuid": "3eef1184-a26e-4636-850d-ea2d31e74c66", 00:07:56.234 "strip_size_kb": 0, 00:07:56.234 "state": "online", 00:07:56.234 "raid_level": "raid1", 00:07:56.234 "superblock": true, 00:07:56.234 "num_base_bdevs": 2, 00:07:56.234 "num_base_bdevs_discovered": 2, 00:07:56.234 "num_base_bdevs_operational": 2, 00:07:56.234 "base_bdevs_list": [ 00:07:56.234 { 00:07:56.234 "name": "pt1", 00:07:56.234 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.234 "is_configured": true, 00:07:56.234 "data_offset": 2048, 00:07:56.234 "data_size": 63488 00:07:56.234 }, 00:07:56.234 { 00:07:56.234 "name": "pt2", 00:07:56.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.234 "is_configured": true, 00:07:56.234 "data_offset": 2048, 00:07:56.234 "data_size": 63488 00:07:56.234 } 00:07:56.234 ] 00:07:56.234 }' 00:07:56.234 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.234 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.493 [2024-11-26 21:15:14.564461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.493 "name": "raid_bdev1", 00:07:56.493 "aliases": [ 00:07:56.493 "3eef1184-a26e-4636-850d-ea2d31e74c66" 00:07:56.493 ], 00:07:56.493 "product_name": "Raid Volume", 00:07:56.493 "block_size": 512, 00:07:56.493 "num_blocks": 63488, 00:07:56.493 "uuid": "3eef1184-a26e-4636-850d-ea2d31e74c66", 00:07:56.493 "assigned_rate_limits": { 00:07:56.493 "rw_ios_per_sec": 0, 00:07:56.493 "rw_mbytes_per_sec": 0, 00:07:56.493 "r_mbytes_per_sec": 0, 00:07:56.493 "w_mbytes_per_sec": 0 00:07:56.493 }, 00:07:56.493 "claimed": false, 00:07:56.493 "zoned": false, 00:07:56.493 "supported_io_types": { 00:07:56.493 "read": true, 00:07:56.493 "write": true, 00:07:56.493 "unmap": false, 00:07:56.493 "flush": false, 00:07:56.493 "reset": true, 00:07:56.493 "nvme_admin": false, 00:07:56.493 "nvme_io": false, 00:07:56.493 "nvme_io_md": false, 00:07:56.493 "write_zeroes": true, 00:07:56.493 "zcopy": 
false, 00:07:56.493 "get_zone_info": false, 00:07:56.493 "zone_management": false, 00:07:56.493 "zone_append": false, 00:07:56.493 "compare": false, 00:07:56.493 "compare_and_write": false, 00:07:56.493 "abort": false, 00:07:56.493 "seek_hole": false, 00:07:56.493 "seek_data": false, 00:07:56.493 "copy": false, 00:07:56.493 "nvme_iov_md": false 00:07:56.493 }, 00:07:56.493 "memory_domains": [ 00:07:56.493 { 00:07:56.493 "dma_device_id": "system", 00:07:56.493 "dma_device_type": 1 00:07:56.493 }, 00:07:56.493 { 00:07:56.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.493 "dma_device_type": 2 00:07:56.493 }, 00:07:56.493 { 00:07:56.493 "dma_device_id": "system", 00:07:56.493 "dma_device_type": 1 00:07:56.493 }, 00:07:56.493 { 00:07:56.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.493 "dma_device_type": 2 00:07:56.493 } 00:07:56.493 ], 00:07:56.493 "driver_specific": { 00:07:56.493 "raid": { 00:07:56.493 "uuid": "3eef1184-a26e-4636-850d-ea2d31e74c66", 00:07:56.493 "strip_size_kb": 0, 00:07:56.493 "state": "online", 00:07:56.493 "raid_level": "raid1", 00:07:56.493 "superblock": true, 00:07:56.493 "num_base_bdevs": 2, 00:07:56.493 "num_base_bdevs_discovered": 2, 00:07:56.493 "num_base_bdevs_operational": 2, 00:07:56.493 "base_bdevs_list": [ 00:07:56.493 { 00:07:56.493 "name": "pt1", 00:07:56.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.493 "is_configured": true, 00:07:56.493 "data_offset": 2048, 00:07:56.493 "data_size": 63488 00:07:56.493 }, 00:07:56.493 { 00:07:56.493 "name": "pt2", 00:07:56.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.493 "is_configured": true, 00:07:56.493 "data_offset": 2048, 00:07:56.493 "data_size": 63488 00:07:56.493 } 00:07:56.493 ] 00:07:56.493 } 00:07:56.493 } 00:07:56.493 }' 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:56.493 pt2' 00:07:56.493 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:56.752 [2024-11-26 21:15:14.800096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3eef1184-a26e-4636-850d-ea2d31e74c66 '!=' 3eef1184-a26e-4636-850d-ea2d31e74c66 ']' 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.752 [2024-11-26 21:15:14.835824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.752 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.752 "name": "raid_bdev1", 00:07:56.752 "uuid": "3eef1184-a26e-4636-850d-ea2d31e74c66", 00:07:56.752 "strip_size_kb": 0, 00:07:56.752 "state": "online", 00:07:56.752 "raid_level": "raid1", 00:07:56.752 "superblock": true, 00:07:56.752 "num_base_bdevs": 2, 00:07:56.752 "num_base_bdevs_discovered": 1, 00:07:56.752 "num_base_bdevs_operational": 1, 00:07:56.752 "base_bdevs_list": [ 00:07:56.752 { 00:07:56.752 "name": null, 
00:07:56.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.752 "is_configured": false, 00:07:56.752 "data_offset": 0, 00:07:56.752 "data_size": 63488 00:07:56.752 }, 00:07:56.752 { 00:07:56.752 "name": "pt2", 00:07:56.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.752 "is_configured": true, 00:07:56.752 "data_offset": 2048, 00:07:56.752 "data_size": 63488 00:07:56.752 } 00:07:56.753 ] 00:07:56.753 }' 00:07:56.753 21:15:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.753 21:15:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.319 [2024-11-26 21:15:15.275354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.319 [2024-11-26 21:15:15.275464] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.319 [2024-11-26 21:15:15.275635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.319 [2024-11-26 21:15:15.275766] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.319 [2024-11-26 21:15:15.275856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.319 [2024-11-26 21:15:15.343190] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:57.319 [2024-11-26 21:15:15.343312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.319 [2024-11-26 21:15:15.343363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:57.319 [2024-11-26 21:15:15.343405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.319 [2024-11-26 21:15:15.345968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.319 [2024-11-26 21:15:15.346082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:57.319 [2024-11-26 21:15:15.346224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:57.319 [2024-11-26 21:15:15.346320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:57.319 [2024-11-26 21:15:15.346487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:57.319 [2024-11-26 21:15:15.346541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:57.319 [2024-11-26 21:15:15.346864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:57.319 [2024-11-26 21:15:15.347107] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:57.319 [2024-11-26 21:15:15.347174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:57.319 [2024-11-26 21:15:15.347436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.319 pt2 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.319 "name": "raid_bdev1", 00:07:57.319 "uuid": "3eef1184-a26e-4636-850d-ea2d31e74c66", 00:07:57.319 "strip_size_kb": 0, 00:07:57.319 "state": "online", 00:07:57.319 "raid_level": "raid1", 00:07:57.319 "superblock": true, 00:07:57.319 "num_base_bdevs": 2, 00:07:57.319 "num_base_bdevs_discovered": 1, 00:07:57.319 "num_base_bdevs_operational": 1, 00:07:57.319 "base_bdevs_list": [ 00:07:57.319 { 00:07:57.319 "name": null, 
00:07:57.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.319 "is_configured": false, 00:07:57.319 "data_offset": 2048, 00:07:57.319 "data_size": 63488 00:07:57.319 }, 00:07:57.319 { 00:07:57.319 "name": "pt2", 00:07:57.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.319 "is_configured": true, 00:07:57.319 "data_offset": 2048, 00:07:57.319 "data_size": 63488 00:07:57.319 } 00:07:57.319 ] 00:07:57.319 }' 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.319 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.886 [2024-11-26 21:15:15.762679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.886 [2024-11-26 21:15:15.762771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.886 [2024-11-26 21:15:15.762901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.886 [2024-11-26 21:15:15.763051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.886 [2024-11-26 21:15:15.763153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.886 [2024-11-26 21:15:15.818637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:57.886 [2024-11-26 21:15:15.818756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.886 [2024-11-26 21:15:15.818809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:57.886 [2024-11-26 21:15:15.818841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.886 [2024-11-26 21:15:15.821437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.886 [2024-11-26 21:15:15.821528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:57.886 [2024-11-26 21:15:15.821680] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:57.886 [2024-11-26 21:15:15.821784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:57.886 [2024-11-26 21:15:15.822081] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number 
on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:57.886 [2024-11-26 21:15:15.822168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.886 [2024-11-26 21:15:15.822218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:57.886 [2024-11-26 21:15:15.822333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:57.886 [2024-11-26 21:15:15.822465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:57.886 [2024-11-26 21:15:15.822511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:57.886 [2024-11-26 21:15:15.822899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:57.886 [2024-11-26 21:15:15.823147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:57.886 [2024-11-26 21:15:15.823206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:57.886 [2024-11-26 21:15:15.823508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.886 pt1 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.886 "name": "raid_bdev1", 00:07:57.886 "uuid": "3eef1184-a26e-4636-850d-ea2d31e74c66", 00:07:57.886 "strip_size_kb": 0, 00:07:57.886 "state": "online", 00:07:57.886 "raid_level": "raid1", 00:07:57.886 "superblock": true, 00:07:57.886 "num_base_bdevs": 2, 00:07:57.886 "num_base_bdevs_discovered": 1, 00:07:57.886 "num_base_bdevs_operational": 1, 00:07:57.886 "base_bdevs_list": [ 00:07:57.886 { 00:07:57.886 "name": null, 00:07:57.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.886 "is_configured": false, 00:07:57.886 "data_offset": 2048, 00:07:57.886 "data_size": 63488 00:07:57.886 }, 00:07:57.886 { 00:07:57.886 "name": "pt2", 00:07:57.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.886 "is_configured": true, 00:07:57.886 "data_offset": 2048, 
00:07:57.886 "data_size": 63488 00:07:57.886 } 00:07:57.886 ] 00:07:57.886 }' 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.886 21:15:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.145 21:15:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:58.145 21:15:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:58.145 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.145 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.145 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.145 21:15:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:58.145 21:15:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.145 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.145 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.145 21:15:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:58.145 [2024-11-26 21:15:16.294993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3eef1184-a26e-4636-850d-ea2d31e74c66 '!=' 3eef1184-a26e-4636-850d-ea2d31e74c66 ']' 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63059 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63059 ']' 00:07:58.403 21:15:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63059 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63059 00:07:58.403 killing process with pid 63059 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63059' 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63059 00:07:58.403 [2024-11-26 21:15:16.354712] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.403 21:15:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63059 00:07:58.403 [2024-11-26 21:15:16.354818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.403 [2024-11-26 21:15:16.354880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.403 [2024-11-26 21:15:16.354896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:58.661 [2024-11-26 21:15:16.577826] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.039 21:15:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:00.039 00:08:00.039 real 0m6.222s 00:08:00.039 user 0m9.375s 00:08:00.039 sys 0m0.962s 00:08:00.039 ************************************ 00:08:00.039 END TEST raid_superblock_test 00:08:00.039 ************************************ 00:08:00.039 
21:15:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.039 21:15:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.039 21:15:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:00.039 21:15:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:00.039 21:15:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.039 21:15:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.039 ************************************ 00:08:00.039 START TEST raid_read_error_test 00:08:00.039 ************************************ 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.039 21:15:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.y4r7lYAjTi 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63389 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63389 00:08:00.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63389 ']' 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.039 21:15:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.039 [2024-11-26 21:15:17.954771] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:00.039 [2024-11-26 21:15:17.954890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63389 ] 00:08:00.039 [2024-11-26 21:15:18.130953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.298 [2024-11-26 21:15:18.242811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.298 [2024-11-26 21:15:18.442284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.298 [2024-11-26 21:15:18.442336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.866 BaseBdev1_malloc 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.866 true 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.866 [2024-11-26 21:15:18.834293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:00.866 [2024-11-26 21:15:18.834352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.866 [2024-11-26 21:15:18.834371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:00.866 [2024-11-26 21:15:18.834381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.866 [2024-11-26 21:15:18.836440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.866 [2024-11-26 21:15:18.836481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:00.866 BaseBdev1 00:08:00.866 
21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.866 BaseBdev2_malloc 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.866 true 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.866 [2024-11-26 21:15:18.898330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:00.866 [2024-11-26 21:15:18.898391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.866 [2024-11-26 21:15:18.898406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:00.866 [2024-11-26 21:15:18.898417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.866 [2024-11-26 
21:15:18.900491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.866 [2024-11-26 21:15:18.900533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:00.866 BaseBdev2 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.866 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.866 [2024-11-26 21:15:18.910366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.866 [2024-11-26 21:15:18.912195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.867 [2024-11-26 21:15:18.912387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:00.867 [2024-11-26 21:15:18.912402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:00.867 [2024-11-26 21:15:18.912625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:00.867 [2024-11-26 21:15:18.912784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:00.867 [2024-11-26 21:15:18.912794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:00.867 [2024-11-26 21:15:18.912929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:00.867 
21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.867 "name": "raid_bdev1", 00:08:00.867 "uuid": "b68c90f8-b90d-4ca7-9491-ab4f76677df0", 00:08:00.867 "strip_size_kb": 0, 00:08:00.867 "state": "online", 00:08:00.867 "raid_level": "raid1", 00:08:00.867 "superblock": true, 00:08:00.867 "num_base_bdevs": 2, 00:08:00.867 "num_base_bdevs_discovered": 2, 00:08:00.867 "num_base_bdevs_operational": 2, 00:08:00.867 "base_bdevs_list": [ 
00:08:00.867 { 00:08:00.867 "name": "BaseBdev1", 00:08:00.867 "uuid": "568bc855-fcc7-5814-98c4-9525bdf7519b", 00:08:00.867 "is_configured": true, 00:08:00.867 "data_offset": 2048, 00:08:00.867 "data_size": 63488 00:08:00.867 }, 00:08:00.867 { 00:08:00.867 "name": "BaseBdev2", 00:08:00.867 "uuid": "cf1a1146-50e4-54b5-8831-073d7fc8b62a", 00:08:00.867 "is_configured": true, 00:08:00.867 "data_offset": 2048, 00:08:00.867 "data_size": 63488 00:08:00.867 } 00:08:00.867 ] 00:08:00.867 }' 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.867 21:15:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.433 21:15:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:01.433 21:15:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:01.433 [2024-11-26 21:15:19.483042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:02.369 21:15:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.369 "name": "raid_bdev1", 00:08:02.369 "uuid": "b68c90f8-b90d-4ca7-9491-ab4f76677df0", 00:08:02.369 "strip_size_kb": 0, 00:08:02.369 "state": "online", 00:08:02.369 "raid_level": "raid1", 00:08:02.369 "superblock": true, 00:08:02.369 "num_base_bdevs": 2, 
00:08:02.369 "num_base_bdevs_discovered": 2, 00:08:02.369 "num_base_bdevs_operational": 2, 00:08:02.369 "base_bdevs_list": [ 00:08:02.369 { 00:08:02.369 "name": "BaseBdev1", 00:08:02.369 "uuid": "568bc855-fcc7-5814-98c4-9525bdf7519b", 00:08:02.369 "is_configured": true, 00:08:02.369 "data_offset": 2048, 00:08:02.369 "data_size": 63488 00:08:02.369 }, 00:08:02.369 { 00:08:02.369 "name": "BaseBdev2", 00:08:02.369 "uuid": "cf1a1146-50e4-54b5-8831-073d7fc8b62a", 00:08:02.369 "is_configured": true, 00:08:02.369 "data_offset": 2048, 00:08:02.369 "data_size": 63488 00:08:02.369 } 00:08:02.369 ] 00:08:02.369 }' 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.369 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.938 [2024-11-26 21:15:20.820798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.938 [2024-11-26 21:15:20.820912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.938 [2024-11-26 21:15:20.823736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.938 [2024-11-26 21:15:20.823840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.938 [2024-11-26 21:15:20.823955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.938 [2024-11-26 21:15:20.824034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:02.938 { 00:08:02.938 "results": [ 00:08:02.938 { 00:08:02.938 "job": 
"raid_bdev1", 00:08:02.938 "core_mask": "0x1", 00:08:02.938 "workload": "randrw", 00:08:02.938 "percentage": 50, 00:08:02.938 "status": "finished", 00:08:02.938 "queue_depth": 1, 00:08:02.938 "io_size": 131072, 00:08:02.938 "runtime": 1.338518, 00:08:02.938 "iops": 17937.74906276942, 00:08:02.938 "mibps": 2242.2186328461776, 00:08:02.938 "io_failed": 0, 00:08:02.938 "io_timeout": 0, 00:08:02.938 "avg_latency_us": 53.04822903120788, 00:08:02.938 "min_latency_us": 22.358078602620086, 00:08:02.938 "max_latency_us": 1352.216593886463 00:08:02.938 } 00:08:02.938 ], 00:08:02.938 "core_count": 1 00:08:02.938 } 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63389 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63389 ']' 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63389 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63389 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.938 killing process with pid 63389 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63389' 00:08:02.938 21:15:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63389 00:08:02.938 [2024-11-26 21:15:20.869679] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.938 21:15:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63389 00:08:02.938 [2024-11-26 21:15:20.998733] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.311 21:15:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.y4r7lYAjTi 00:08:04.312 21:15:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:04.312 21:15:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:04.312 21:15:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:04.312 21:15:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:04.312 21:15:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.312 21:15:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:04.312 21:15:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:04.312 ************************************ 00:08:04.312 END TEST raid_read_error_test 00:08:04.312 ************************************ 00:08:04.312 00:08:04.312 real 0m4.452s 00:08:04.312 user 0m5.351s 00:08:04.312 sys 0m0.534s 00:08:04.312 21:15:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.312 21:15:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.312 21:15:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:04.312 21:15:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:04.312 21:15:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.312 21:15:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.312 ************************************ 00:08:04.312 START TEST raid_write_error_test 00:08:04.312 ************************************ 00:08:04.312 21:15:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:04.312 
21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.k2C7QZMjXx 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63535 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63535 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63535 ']' 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.312 21:15:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.572 [2024-11-26 21:15:22.467055] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:04.572 [2024-11-26 21:15:22.467194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63535 ] 00:08:04.572 [2024-11-26 21:15:22.642463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.832 [2024-11-26 21:15:22.756241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.832 [2024-11-26 21:15:22.954987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.832 [2024-11-26 21:15:22.955024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.400 BaseBdev1_malloc 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.400 true 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.400 [2024-11-26 21:15:23.349444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:05.400 [2024-11-26 21:15:23.349497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.400 [2024-11-26 21:15:23.349515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:05.400 [2024-11-26 21:15:23.349525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.400 [2024-11-26 21:15:23.351503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.400 [2024-11-26 21:15:23.351543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:05.400 BaseBdev1 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.400 BaseBdev2_malloc 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:05.400 21:15:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.400 true 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.400 [2024-11-26 21:15:23.414073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:05.400 [2024-11-26 21:15:23.414144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.400 [2024-11-26 21:15:23.414164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:05.400 [2024-11-26 21:15:23.414174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.400 [2024-11-26 21:15:23.416305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.400 [2024-11-26 21:15:23.416348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:05.400 BaseBdev2 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.400 [2024-11-26 21:15:23.426117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:05.400 [2024-11-26 21:15:23.427947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.400 [2024-11-26 21:15:23.428191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:05.400 [2024-11-26 21:15:23.428208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:05.400 [2024-11-26 21:15:23.428491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:05.400 [2024-11-26 21:15:23.428682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:05.400 [2024-11-26 21:15:23.428693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:05.400 [2024-11-26 21:15:23.428870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.400 "name": "raid_bdev1", 00:08:05.400 "uuid": "b151ce2f-e20d-43b2-89f9-85878e80bafa", 00:08:05.400 "strip_size_kb": 0, 00:08:05.400 "state": "online", 00:08:05.400 "raid_level": "raid1", 00:08:05.400 "superblock": true, 00:08:05.400 "num_base_bdevs": 2, 00:08:05.400 "num_base_bdevs_discovered": 2, 00:08:05.400 "num_base_bdevs_operational": 2, 00:08:05.400 "base_bdevs_list": [ 00:08:05.400 { 00:08:05.400 "name": "BaseBdev1", 00:08:05.400 "uuid": "74f75eb3-3df9-5148-9a09-c36f0a6846eb", 00:08:05.400 "is_configured": true, 00:08:05.400 "data_offset": 2048, 00:08:05.400 "data_size": 63488 00:08:05.400 }, 00:08:05.400 { 00:08:05.400 "name": "BaseBdev2", 00:08:05.400 "uuid": "1406755f-9587-5098-b1ff-f7d9fcd67573", 00:08:05.400 "is_configured": true, 00:08:05.400 "data_offset": 2048, 00:08:05.400 "data_size": 63488 00:08:05.400 } 00:08:05.400 ] 00:08:05.400 }' 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.400 21:15:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.969 21:15:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:05.969 21:15:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:05.969 [2024-11-26 21:15:23.954487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.907 [2024-11-26 21:15:24.869954] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:06.907 [2024-11-26 21:15:24.870128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.907 [2024-11-26 21:15:24.870357] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.907 "name": "raid_bdev1", 00:08:06.907 "uuid": "b151ce2f-e20d-43b2-89f9-85878e80bafa", 00:08:06.907 "strip_size_kb": 0, 00:08:06.907 "state": "online", 00:08:06.907 "raid_level": "raid1", 00:08:06.907 "superblock": true, 00:08:06.907 "num_base_bdevs": 2, 00:08:06.907 "num_base_bdevs_discovered": 1, 00:08:06.907 "num_base_bdevs_operational": 1, 00:08:06.907 "base_bdevs_list": [ 00:08:06.907 { 00:08:06.907 "name": null, 00:08:06.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.907 "is_configured": false, 00:08:06.907 "data_offset": 0, 00:08:06.907 "data_size": 63488 00:08:06.907 }, 00:08:06.907 { 00:08:06.907 "name": 
"BaseBdev2", 00:08:06.907 "uuid": "1406755f-9587-5098-b1ff-f7d9fcd67573", 00:08:06.907 "is_configured": true, 00:08:06.907 "data_offset": 2048, 00:08:06.907 "data_size": 63488 00:08:06.907 } 00:08:06.907 ] 00:08:06.907 }' 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.907 21:15:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.165 21:15:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:07.165 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.165 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.165 [2024-11-26 21:15:25.262716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.165 [2024-11-26 21:15:25.262748] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.165 { 00:08:07.165 "results": [ 00:08:07.165 { 00:08:07.165 "job": "raid_bdev1", 00:08:07.165 "core_mask": "0x1", 00:08:07.165 "workload": "randrw", 00:08:07.165 "percentage": 50, 00:08:07.165 "status": "finished", 00:08:07.165 "queue_depth": 1, 00:08:07.165 "io_size": 131072, 00:08:07.165 "runtime": 1.308839, 00:08:07.165 "iops": 22069.941375524417, 00:08:07.165 "mibps": 2758.742671940552, 00:08:07.165 "io_failed": 0, 00:08:07.165 "io_timeout": 0, 00:08:07.165 "avg_latency_us": 42.72547472416036, 00:08:07.165 "min_latency_us": 21.910917030567685, 00:08:07.165 "max_latency_us": 1445.2262008733624 00:08:07.165 } 00:08:07.165 ], 00:08:07.165 "core_count": 1 00:08:07.165 } 00:08:07.165 [2024-11-26 21:15:25.265337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.165 [2024-11-26 21:15:25.265378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.165 [2024-11-26 21:15:25.265435] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.165 [2024-11-26 21:15:25.265447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:07.165 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.165 21:15:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63535 00:08:07.165 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63535 ']' 00:08:07.165 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63535 00:08:07.166 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:07.166 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.166 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63535 00:08:07.166 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.166 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.166 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63535' 00:08:07.166 killing process with pid 63535 00:08:07.166 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63535 00:08:07.166 [2024-11-26 21:15:25.314558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.166 21:15:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63535 00:08:07.425 [2024-11-26 21:15:25.446505] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.803 21:15:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.k2C7QZMjXx 00:08:08.803 21:15:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- 
# awk '{print $6}' 00:08:08.803 21:15:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:08.803 21:15:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:08.803 21:15:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:08.803 21:15:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.803 21:15:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:08.803 21:15:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:08.803 00:08:08.803 real 0m4.366s 00:08:08.803 user 0m5.175s 00:08:08.803 sys 0m0.503s 00:08:08.803 21:15:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.803 21:15:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.803 ************************************ 00:08:08.803 END TEST raid_write_error_test 00:08:08.803 ************************************ 00:08:08.803 21:15:26 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:08.803 21:15:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:08.803 21:15:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:08.803 21:15:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:08.803 21:15:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.803 21:15:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.803 ************************************ 00:08:08.803 START TEST raid_state_function_test 00:08:08.803 ************************************ 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:08.803 21:15:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63673 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63673' 00:08:08.803 Process raid pid: 63673 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63673 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63673 ']' 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.803 21:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.803 [2024-11-26 21:15:26.908856] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:08.803 [2024-11-26 21:15:26.909111] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.063 [2024-11-26 21:15:27.088439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.063 [2024-11-26 21:15:27.204619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.333 [2024-11-26 21:15:27.403733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.333 [2024-11-26 21:15:27.403857] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.911 [2024-11-26 21:15:27.858189] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.911 [2024-11-26 21:15:27.858316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.911 [2024-11-26 21:15:27.858362] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.911 [2024-11-26 21:15:27.858403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.911 [2024-11-26 21:15:27.858439] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.911 [2024-11-26 21:15:27.858477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.911 21:15:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.911 "name": "Existed_Raid", 00:08:09.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.911 "strip_size_kb": 64, 00:08:09.911 "state": "configuring", 00:08:09.911 "raid_level": "raid0", 00:08:09.911 "superblock": false, 00:08:09.911 "num_base_bdevs": 3, 00:08:09.911 "num_base_bdevs_discovered": 0, 00:08:09.911 "num_base_bdevs_operational": 3, 00:08:09.911 "base_bdevs_list": [ 00:08:09.911 { 00:08:09.911 "name": "BaseBdev1", 00:08:09.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.911 "is_configured": false, 00:08:09.911 "data_offset": 0, 00:08:09.911 "data_size": 0 00:08:09.911 }, 00:08:09.911 { 00:08:09.911 "name": "BaseBdev2", 00:08:09.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.911 "is_configured": false, 00:08:09.911 "data_offset": 0, 00:08:09.911 "data_size": 0 00:08:09.911 }, 00:08:09.911 { 00:08:09.911 "name": "BaseBdev3", 00:08:09.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.911 "is_configured": false, 00:08:09.911 "data_offset": 0, 00:08:09.911 "data_size": 0 00:08:09.911 } 00:08:09.911 ] 00:08:09.911 }' 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.911 21:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.169 21:15:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.169 [2024-11-26 21:15:28.234151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.169 [2024-11-26 21:15:28.234195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.169 [2024-11-26 21:15:28.242151] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.169 [2024-11-26 21:15:28.242204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.169 [2024-11-26 21:15:28.242217] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.169 [2024-11-26 21:15:28.242230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.169 [2024-11-26 21:15:28.242239] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.169 [2024-11-26 21:15:28.242253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.169 [2024-11-26 21:15:28.288219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.169 BaseBdev1 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.169 [ 00:08:10.169 { 00:08:10.169 "name": "BaseBdev1", 00:08:10.169 "aliases": [ 00:08:10.169 "eb81f5f5-276d-44bc-a533-b6dc09321524" 00:08:10.169 ], 00:08:10.169 
"product_name": "Malloc disk", 00:08:10.169 "block_size": 512, 00:08:10.169 "num_blocks": 65536, 00:08:10.169 "uuid": "eb81f5f5-276d-44bc-a533-b6dc09321524", 00:08:10.169 "assigned_rate_limits": { 00:08:10.169 "rw_ios_per_sec": 0, 00:08:10.169 "rw_mbytes_per_sec": 0, 00:08:10.169 "r_mbytes_per_sec": 0, 00:08:10.169 "w_mbytes_per_sec": 0 00:08:10.169 }, 00:08:10.169 "claimed": true, 00:08:10.169 "claim_type": "exclusive_write", 00:08:10.169 "zoned": false, 00:08:10.169 "supported_io_types": { 00:08:10.169 "read": true, 00:08:10.169 "write": true, 00:08:10.169 "unmap": true, 00:08:10.169 "flush": true, 00:08:10.169 "reset": true, 00:08:10.169 "nvme_admin": false, 00:08:10.169 "nvme_io": false, 00:08:10.169 "nvme_io_md": false, 00:08:10.169 "write_zeroes": true, 00:08:10.169 "zcopy": true, 00:08:10.169 "get_zone_info": false, 00:08:10.169 "zone_management": false, 00:08:10.169 "zone_append": false, 00:08:10.169 "compare": false, 00:08:10.169 "compare_and_write": false, 00:08:10.169 "abort": true, 00:08:10.169 "seek_hole": false, 00:08:10.169 "seek_data": false, 00:08:10.169 "copy": true, 00:08:10.169 "nvme_iov_md": false 00:08:10.169 }, 00:08:10.169 "memory_domains": [ 00:08:10.169 { 00:08:10.169 "dma_device_id": "system", 00:08:10.169 "dma_device_type": 1 00:08:10.169 }, 00:08:10.169 { 00:08:10.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.169 "dma_device_type": 2 00:08:10.169 } 00:08:10.169 ], 00:08:10.169 "driver_specific": {} 00:08:10.169 } 00:08:10.169 ] 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.169 21:15:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.169 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.170 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.170 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.170 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.170 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.170 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.170 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.170 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.427 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.427 "name": "Existed_Raid", 00:08:10.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.427 "strip_size_kb": 64, 00:08:10.427 "state": "configuring", 00:08:10.427 "raid_level": "raid0", 00:08:10.427 "superblock": false, 00:08:10.427 "num_base_bdevs": 3, 00:08:10.427 "num_base_bdevs_discovered": 1, 00:08:10.427 "num_base_bdevs_operational": 3, 00:08:10.427 "base_bdevs_list": [ 00:08:10.427 { 00:08:10.427 "name": "BaseBdev1", 
00:08:10.427 "uuid": "eb81f5f5-276d-44bc-a533-b6dc09321524", 00:08:10.427 "is_configured": true, 00:08:10.427 "data_offset": 0, 00:08:10.427 "data_size": 65536 00:08:10.427 }, 00:08:10.427 { 00:08:10.427 "name": "BaseBdev2", 00:08:10.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.427 "is_configured": false, 00:08:10.427 "data_offset": 0, 00:08:10.427 "data_size": 0 00:08:10.427 }, 00:08:10.427 { 00:08:10.427 "name": "BaseBdev3", 00:08:10.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.427 "is_configured": false, 00:08:10.427 "data_offset": 0, 00:08:10.427 "data_size": 0 00:08:10.427 } 00:08:10.427 ] 00:08:10.427 }' 00:08:10.427 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.427 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.686 [2024-11-26 21:15:28.667746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.686 [2024-11-26 21:15:28.667856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.686 [2024-11-26 
21:15:28.679775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.686 [2024-11-26 21:15:28.681598] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.686 [2024-11-26 21:15:28.681639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.686 [2024-11-26 21:15:28.681649] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.686 [2024-11-26 21:15:28.681658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.686 "name": "Existed_Raid", 00:08:10.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.686 "strip_size_kb": 64, 00:08:10.686 "state": "configuring", 00:08:10.686 "raid_level": "raid0", 00:08:10.686 "superblock": false, 00:08:10.686 "num_base_bdevs": 3, 00:08:10.686 "num_base_bdevs_discovered": 1, 00:08:10.686 "num_base_bdevs_operational": 3, 00:08:10.686 "base_bdevs_list": [ 00:08:10.686 { 00:08:10.686 "name": "BaseBdev1", 00:08:10.686 "uuid": "eb81f5f5-276d-44bc-a533-b6dc09321524", 00:08:10.686 "is_configured": true, 00:08:10.686 "data_offset": 0, 00:08:10.686 "data_size": 65536 00:08:10.686 }, 00:08:10.686 { 00:08:10.686 "name": "BaseBdev2", 00:08:10.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.686 "is_configured": false, 00:08:10.686 "data_offset": 0, 00:08:10.686 "data_size": 0 00:08:10.686 }, 00:08:10.686 { 00:08:10.686 "name": "BaseBdev3", 00:08:10.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.686 "is_configured": false, 00:08:10.686 "data_offset": 0, 00:08:10.686 "data_size": 0 00:08:10.686 } 00:08:10.686 ] 00:08:10.686 }' 00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:10.686 21:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.946 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:10.946 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.946 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.205 [2024-11-26 21:15:29.121501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.205 BaseBdev2 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.205 21:15:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.205 [ 00:08:11.205 { 00:08:11.205 "name": "BaseBdev2", 00:08:11.205 "aliases": [ 00:08:11.205 "33587a55-61b3-44f6-9a22-04864fc2f944" 00:08:11.205 ], 00:08:11.205 "product_name": "Malloc disk", 00:08:11.205 "block_size": 512, 00:08:11.205 "num_blocks": 65536, 00:08:11.205 "uuid": "33587a55-61b3-44f6-9a22-04864fc2f944", 00:08:11.205 "assigned_rate_limits": { 00:08:11.205 "rw_ios_per_sec": 0, 00:08:11.205 "rw_mbytes_per_sec": 0, 00:08:11.205 "r_mbytes_per_sec": 0, 00:08:11.205 "w_mbytes_per_sec": 0 00:08:11.205 }, 00:08:11.205 "claimed": true, 00:08:11.205 "claim_type": "exclusive_write", 00:08:11.205 "zoned": false, 00:08:11.205 "supported_io_types": { 00:08:11.205 "read": true, 00:08:11.205 "write": true, 00:08:11.205 "unmap": true, 00:08:11.205 "flush": true, 00:08:11.205 "reset": true, 00:08:11.205 "nvme_admin": false, 00:08:11.205 "nvme_io": false, 00:08:11.205 "nvme_io_md": false, 00:08:11.205 "write_zeroes": true, 00:08:11.205 "zcopy": true, 00:08:11.205 "get_zone_info": false, 00:08:11.205 "zone_management": false, 00:08:11.205 "zone_append": false, 00:08:11.205 "compare": false, 00:08:11.205 "compare_and_write": false, 00:08:11.205 "abort": true, 00:08:11.205 "seek_hole": false, 00:08:11.205 "seek_data": false, 00:08:11.205 "copy": true, 00:08:11.205 "nvme_iov_md": false 00:08:11.205 }, 00:08:11.205 "memory_domains": [ 00:08:11.205 { 00:08:11.205 "dma_device_id": "system", 00:08:11.205 "dma_device_type": 1 00:08:11.205 }, 00:08:11.205 { 00:08:11.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.205 "dma_device_type": 2 00:08:11.205 } 00:08:11.205 ], 00:08:11.205 "driver_specific": {} 00:08:11.205 } 00:08:11.205 ] 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.205 21:15:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.205 "name": "Existed_Raid", 00:08:11.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.205 "strip_size_kb": 64, 00:08:11.205 "state": "configuring", 00:08:11.205 "raid_level": "raid0", 00:08:11.205 "superblock": false, 00:08:11.205 "num_base_bdevs": 3, 00:08:11.205 "num_base_bdevs_discovered": 2, 00:08:11.205 "num_base_bdevs_operational": 3, 00:08:11.205 "base_bdevs_list": [ 00:08:11.205 { 00:08:11.205 "name": "BaseBdev1", 00:08:11.205 "uuid": "eb81f5f5-276d-44bc-a533-b6dc09321524", 00:08:11.205 "is_configured": true, 00:08:11.205 "data_offset": 0, 00:08:11.205 "data_size": 65536 00:08:11.205 }, 00:08:11.205 { 00:08:11.205 "name": "BaseBdev2", 00:08:11.205 "uuid": "33587a55-61b3-44f6-9a22-04864fc2f944", 00:08:11.205 "is_configured": true, 00:08:11.205 "data_offset": 0, 00:08:11.205 "data_size": 65536 00:08:11.205 }, 00:08:11.205 { 00:08:11.205 "name": "BaseBdev3", 00:08:11.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.205 "is_configured": false, 00:08:11.205 "data_offset": 0, 00:08:11.205 "data_size": 0 00:08:11.205 } 00:08:11.205 ] 00:08:11.205 }' 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.205 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.464 [2024-11-26 21:15:29.613123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.464 [2024-11-26 21:15:29.613169] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:11.464 [2024-11-26 21:15:29.613182] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:11.464 [2024-11-26 21:15:29.613440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:11.464 [2024-11-26 21:15:29.613612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:11.464 [2024-11-26 21:15:29.613624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:11.464 [2024-11-26 21:15:29.613927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.464 BaseBdev3 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.464 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.723 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.723 
21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:11.723 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.723 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.723 [ 00:08:11.723 { 00:08:11.723 "name": "BaseBdev3", 00:08:11.723 "aliases": [ 00:08:11.723 "f299b209-1bb3-4049-b1e0-c1832427b032" 00:08:11.723 ], 00:08:11.723 "product_name": "Malloc disk", 00:08:11.723 "block_size": 512, 00:08:11.723 "num_blocks": 65536, 00:08:11.723 "uuid": "f299b209-1bb3-4049-b1e0-c1832427b032", 00:08:11.723 "assigned_rate_limits": { 00:08:11.723 "rw_ios_per_sec": 0, 00:08:11.723 "rw_mbytes_per_sec": 0, 00:08:11.723 "r_mbytes_per_sec": 0, 00:08:11.723 "w_mbytes_per_sec": 0 00:08:11.723 }, 00:08:11.723 "claimed": true, 00:08:11.723 "claim_type": "exclusive_write", 00:08:11.723 "zoned": false, 00:08:11.723 "supported_io_types": { 00:08:11.723 "read": true, 00:08:11.723 "write": true, 00:08:11.723 "unmap": true, 00:08:11.723 "flush": true, 00:08:11.723 "reset": true, 00:08:11.723 "nvme_admin": false, 00:08:11.723 "nvme_io": false, 00:08:11.723 "nvme_io_md": false, 00:08:11.723 "write_zeroes": true, 00:08:11.723 "zcopy": true, 00:08:11.723 "get_zone_info": false, 00:08:11.723 "zone_management": false, 00:08:11.723 "zone_append": false, 00:08:11.723 "compare": false, 00:08:11.723 "compare_and_write": false, 00:08:11.723 "abort": true, 00:08:11.723 "seek_hole": false, 00:08:11.723 "seek_data": false, 00:08:11.723 "copy": true, 00:08:11.723 "nvme_iov_md": false 00:08:11.723 }, 00:08:11.723 "memory_domains": [ 00:08:11.723 { 00:08:11.723 "dma_device_id": "system", 00:08:11.723 "dma_device_type": 1 00:08:11.723 }, 00:08:11.723 { 00:08:11.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.723 "dma_device_type": 2 00:08:11.723 } 00:08:11.723 ], 00:08:11.724 "driver_specific": {} 00:08:11.724 } 00:08:11.724 ] 
00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.724 "name": "Existed_Raid", 00:08:11.724 "uuid": "a6ebf4de-60cf-4e5c-a672-5bd6b294dbc4", 00:08:11.724 "strip_size_kb": 64, 00:08:11.724 "state": "online", 00:08:11.724 "raid_level": "raid0", 00:08:11.724 "superblock": false, 00:08:11.724 "num_base_bdevs": 3, 00:08:11.724 "num_base_bdevs_discovered": 3, 00:08:11.724 "num_base_bdevs_operational": 3, 00:08:11.724 "base_bdevs_list": [ 00:08:11.724 { 00:08:11.724 "name": "BaseBdev1", 00:08:11.724 "uuid": "eb81f5f5-276d-44bc-a533-b6dc09321524", 00:08:11.724 "is_configured": true, 00:08:11.724 "data_offset": 0, 00:08:11.724 "data_size": 65536 00:08:11.724 }, 00:08:11.724 { 00:08:11.724 "name": "BaseBdev2", 00:08:11.724 "uuid": "33587a55-61b3-44f6-9a22-04864fc2f944", 00:08:11.724 "is_configured": true, 00:08:11.724 "data_offset": 0, 00:08:11.724 "data_size": 65536 00:08:11.724 }, 00:08:11.724 { 00:08:11.724 "name": "BaseBdev3", 00:08:11.724 "uuid": "f299b209-1bb3-4049-b1e0-c1832427b032", 00:08:11.724 "is_configured": true, 00:08:11.724 "data_offset": 0, 00:08:11.724 "data_size": 65536 00:08:11.724 } 00:08:11.724 ] 00:08:11.724 }' 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.724 21:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.983 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.983 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.983 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.983 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:11.983 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.983 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.983 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.983 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.983 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.983 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.983 [2024-11-26 21:15:30.108631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.983 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.242 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.242 "name": "Existed_Raid", 00:08:12.242 "aliases": [ 00:08:12.242 "a6ebf4de-60cf-4e5c-a672-5bd6b294dbc4" 00:08:12.242 ], 00:08:12.242 "product_name": "Raid Volume", 00:08:12.242 "block_size": 512, 00:08:12.242 "num_blocks": 196608, 00:08:12.242 "uuid": "a6ebf4de-60cf-4e5c-a672-5bd6b294dbc4", 00:08:12.242 "assigned_rate_limits": { 00:08:12.242 "rw_ios_per_sec": 0, 00:08:12.242 "rw_mbytes_per_sec": 0, 00:08:12.242 "r_mbytes_per_sec": 0, 00:08:12.242 "w_mbytes_per_sec": 0 00:08:12.242 }, 00:08:12.242 "claimed": false, 00:08:12.242 "zoned": false, 00:08:12.242 "supported_io_types": { 00:08:12.242 "read": true, 00:08:12.242 "write": true, 00:08:12.242 "unmap": true, 00:08:12.242 "flush": true, 00:08:12.242 "reset": true, 00:08:12.242 "nvme_admin": false, 00:08:12.242 "nvme_io": false, 00:08:12.242 "nvme_io_md": false, 00:08:12.242 "write_zeroes": true, 00:08:12.242 "zcopy": false, 00:08:12.242 "get_zone_info": false, 00:08:12.242 "zone_management": false, 00:08:12.242 
"zone_append": false, 00:08:12.242 "compare": false, 00:08:12.242 "compare_and_write": false, 00:08:12.242 "abort": false, 00:08:12.242 "seek_hole": false, 00:08:12.242 "seek_data": false, 00:08:12.242 "copy": false, 00:08:12.242 "nvme_iov_md": false 00:08:12.242 }, 00:08:12.242 "memory_domains": [ 00:08:12.242 { 00:08:12.242 "dma_device_id": "system", 00:08:12.242 "dma_device_type": 1 00:08:12.242 }, 00:08:12.242 { 00:08:12.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.242 "dma_device_type": 2 00:08:12.242 }, 00:08:12.242 { 00:08:12.242 "dma_device_id": "system", 00:08:12.242 "dma_device_type": 1 00:08:12.242 }, 00:08:12.242 { 00:08:12.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.242 "dma_device_type": 2 00:08:12.242 }, 00:08:12.242 { 00:08:12.242 "dma_device_id": "system", 00:08:12.242 "dma_device_type": 1 00:08:12.242 }, 00:08:12.242 { 00:08:12.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.242 "dma_device_type": 2 00:08:12.242 } 00:08:12.242 ], 00:08:12.242 "driver_specific": { 00:08:12.242 "raid": { 00:08:12.242 "uuid": "a6ebf4de-60cf-4e5c-a672-5bd6b294dbc4", 00:08:12.242 "strip_size_kb": 64, 00:08:12.242 "state": "online", 00:08:12.242 "raid_level": "raid0", 00:08:12.242 "superblock": false, 00:08:12.242 "num_base_bdevs": 3, 00:08:12.242 "num_base_bdevs_discovered": 3, 00:08:12.242 "num_base_bdevs_operational": 3, 00:08:12.242 "base_bdevs_list": [ 00:08:12.242 { 00:08:12.243 "name": "BaseBdev1", 00:08:12.243 "uuid": "eb81f5f5-276d-44bc-a533-b6dc09321524", 00:08:12.243 "is_configured": true, 00:08:12.243 "data_offset": 0, 00:08:12.243 "data_size": 65536 00:08:12.243 }, 00:08:12.243 { 00:08:12.243 "name": "BaseBdev2", 00:08:12.243 "uuid": "33587a55-61b3-44f6-9a22-04864fc2f944", 00:08:12.243 "is_configured": true, 00:08:12.243 "data_offset": 0, 00:08:12.243 "data_size": 65536 00:08:12.243 }, 00:08:12.243 { 00:08:12.243 "name": "BaseBdev3", 00:08:12.243 "uuid": "f299b209-1bb3-4049-b1e0-c1832427b032", 00:08:12.243 "is_configured": true, 
00:08:12.243 "data_offset": 0, 00:08:12.243 "data_size": 65536 00:08:12.243 } 00:08:12.243 ] 00:08:12.243 } 00:08:12.243 } 00:08:12.243 }' 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:12.243 BaseBdev2 00:08:12.243 BaseBdev3' 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.243 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.243 [2024-11-26 21:15:30.371898] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.243 [2024-11-26 21:15:30.371927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.243 [2024-11-26 21:15:30.372000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.504 "name": "Existed_Raid", 00:08:12.504 "uuid": "a6ebf4de-60cf-4e5c-a672-5bd6b294dbc4", 00:08:12.504 "strip_size_kb": 64, 00:08:12.504 "state": "offline", 00:08:12.504 "raid_level": "raid0", 00:08:12.504 "superblock": false, 00:08:12.504 "num_base_bdevs": 3, 00:08:12.504 "num_base_bdevs_discovered": 2, 00:08:12.504 "num_base_bdevs_operational": 2, 00:08:12.504 "base_bdevs_list": [ 00:08:12.504 { 00:08:12.504 "name": null, 00:08:12.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.504 "is_configured": false, 00:08:12.504 "data_offset": 0, 00:08:12.504 "data_size": 65536 00:08:12.504 }, 00:08:12.504 { 00:08:12.504 "name": "BaseBdev2", 00:08:12.504 "uuid": "33587a55-61b3-44f6-9a22-04864fc2f944", 00:08:12.504 "is_configured": true, 00:08:12.504 "data_offset": 0, 00:08:12.504 "data_size": 65536 00:08:12.504 }, 00:08:12.504 { 00:08:12.504 "name": "BaseBdev3", 00:08:12.504 "uuid": "f299b209-1bb3-4049-b1e0-c1832427b032", 00:08:12.504 "is_configured": true, 00:08:12.504 "data_offset": 0, 00:08:12.504 "data_size": 65536 00:08:12.504 } 00:08:12.504 ] 00:08:12.504 }' 00:08:12.504 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.504 21:15:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.764 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:12.764 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:12.764 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.764 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:12.764 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.764 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.764 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.024 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:13.024 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:13.024 21:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:13.024 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.024 21:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.024 [2024-11-26 21:15:30.928418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.024 [2024-11-26 21:15:31.075220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:13.024 [2024-11-26 21:15:31.075315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.024 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.284 BaseBdev2
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.284 [
00:08:13.284 {
00:08:13.284 "name": "BaseBdev2",
00:08:13.284 "aliases": [
00:08:13.284 "29714f68-eeb4-431b-afe6-38866356446a"
00:08:13.284 ],
00:08:13.284 "product_name": "Malloc disk",
00:08:13.284 "block_size": 512,
00:08:13.284 "num_blocks": 65536,
00:08:13.284 "uuid": "29714f68-eeb4-431b-afe6-38866356446a",
00:08:13.284 "assigned_rate_limits": {
00:08:13.284 "rw_ios_per_sec": 0,
00:08:13.284 "rw_mbytes_per_sec": 0,
00:08:13.284 "r_mbytes_per_sec": 0,
00:08:13.284 "w_mbytes_per_sec": 0
00:08:13.284 },
00:08:13.284 "claimed": false,
00:08:13.284 "zoned": false,
00:08:13.284 "supported_io_types": {
00:08:13.284 "read": true,
00:08:13.284 "write": true,
00:08:13.284 "unmap": true,
00:08:13.284 "flush": true,
00:08:13.284 "reset": true,
00:08:13.284 "nvme_admin": false,
00:08:13.284 "nvme_io": false,
00:08:13.284 "nvme_io_md": false,
00:08:13.284 "write_zeroes": true,
00:08:13.284 "zcopy": true,
00:08:13.284 "get_zone_info": false,
00:08:13.284 "zone_management": false,
00:08:13.284 "zone_append": false,
00:08:13.284 "compare": false,
00:08:13.284 "compare_and_write": false,
00:08:13.284 "abort": true,
00:08:13.284 "seek_hole": false,
00:08:13.284 "seek_data": false,
00:08:13.284 "copy": true,
00:08:13.284 "nvme_iov_md": false
00:08:13.284 },
00:08:13.284 "memory_domains": [
00:08:13.284 {
00:08:13.284 "dma_device_id": "system",
00:08:13.284 "dma_device_type": 1
00:08:13.284 },
00:08:13.284 {
00:08:13.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:13.284 "dma_device_type": 2
00:08:13.284 }
00:08:13.284 ],
00:08:13.284 "driver_specific": {}
00:08:13.284 }
00:08:13.284 ]
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.284 BaseBdev3
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:13.284 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.285 [
00:08:13.285 {
00:08:13.285 "name": "BaseBdev3",
00:08:13.285 "aliases": [
00:08:13.285 "e2e0b436-b4f9-4850-a1c1-a022f6fa1c15"
00:08:13.285 ],
00:08:13.285 "product_name": "Malloc disk",
00:08:13.285 "block_size": 512,
00:08:13.285 "num_blocks": 65536,
00:08:13.285 "uuid": "e2e0b436-b4f9-4850-a1c1-a022f6fa1c15",
00:08:13.285 "assigned_rate_limits": {
00:08:13.285 "rw_ios_per_sec": 0,
00:08:13.285 "rw_mbytes_per_sec": 0,
00:08:13.285 "r_mbytes_per_sec": 0,
00:08:13.285 "w_mbytes_per_sec": 0
00:08:13.285 },
00:08:13.285 "claimed": false,
00:08:13.285 "zoned": false,
00:08:13.285 "supported_io_types": {
00:08:13.285 "read": true,
00:08:13.285 "write": true,
00:08:13.285 "unmap": true,
00:08:13.285 "flush": true,
00:08:13.285 "reset": true,
00:08:13.285 "nvme_admin": false,
00:08:13.285 "nvme_io": false,
00:08:13.285 "nvme_io_md": false,
00:08:13.285 "write_zeroes": true,
00:08:13.285 "zcopy": true,
00:08:13.285 "get_zone_info": false,
00:08:13.285 "zone_management": false,
00:08:13.285 "zone_append": false,
00:08:13.285 "compare": false,
00:08:13.285 "compare_and_write": false,
00:08:13.285 "abort": true,
00:08:13.285 "seek_hole": false,
00:08:13.285 "seek_data": false,
00:08:13.285 "copy": true,
00:08:13.285 "nvme_iov_md": false
00:08:13.285 },
00:08:13.285 "memory_domains": [
00:08:13.285 {
00:08:13.285 "dma_device_id": "system",
00:08:13.285 "dma_device_type": 1
00:08:13.285 },
00:08:13.285 {
00:08:13.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:13.285 "dma_device_type": 2
00:08:13.285 }
00:08:13.285 ],
00:08:13.285 "driver_specific": {}
00:08:13.285 }
00:08:13.285 ]
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.285 [2024-11-26 21:15:31.374090] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:13.285 [2024-11-26 21:15:31.374184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:13.285 [2024-11-26 21:15:31.374226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:13.285 [2024-11-26 21:15:31.376218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:13.285 "name": "Existed_Raid",
00:08:13.285 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:13.285 "strip_size_kb": 64,
00:08:13.285 "state": "configuring",
00:08:13.285 "raid_level": "raid0",
00:08:13.285 "superblock": false,
00:08:13.285 "num_base_bdevs": 3,
00:08:13.285 "num_base_bdevs_discovered": 2,
00:08:13.285 "num_base_bdevs_operational": 3,
00:08:13.285 "base_bdevs_list": [
00:08:13.285 {
00:08:13.285 "name": "BaseBdev1",
00:08:13.285 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:13.285 "is_configured": false,
00:08:13.285 "data_offset": 0,
00:08:13.285 "data_size": 0
00:08:13.285 },
00:08:13.285 {
00:08:13.285 "name": "BaseBdev2",
00:08:13.285 "uuid": "29714f68-eeb4-431b-afe6-38866356446a",
00:08:13.285 "is_configured": true,
00:08:13.285 "data_offset": 0,
00:08:13.285 "data_size": 65536
00:08:13.285 },
00:08:13.285 {
00:08:13.285 "name": "BaseBdev3",
00:08:13.285 "uuid": "e2e0b436-b4f9-4850-a1c1-a022f6fa1c15",
00:08:13.285 "is_configured": true,
00:08:13.285 "data_offset": 0,
00:08:13.285 "data_size": 65536
00:08:13.285 }
00:08:13.285 ]
00:08:13.285 }'
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:13.285 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.856 [2024-11-26 21:15:31.793406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:13.856 "name": "Existed_Raid",
00:08:13.856 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:13.856 "strip_size_kb": 64,
00:08:13.856 "state": "configuring",
00:08:13.856 "raid_level": "raid0",
00:08:13.856 "superblock": false,
00:08:13.856 "num_base_bdevs": 3,
00:08:13.856 "num_base_bdevs_discovered": 1,
00:08:13.856 "num_base_bdevs_operational": 3,
00:08:13.856 "base_bdevs_list": [
00:08:13.856 {
00:08:13.856 "name": "BaseBdev1",
00:08:13.856 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:13.856 "is_configured": false,
00:08:13.856 "data_offset": 0,
00:08:13.856 "data_size": 0
00:08:13.856 },
00:08:13.856 {
00:08:13.856 "name": null,
00:08:13.856 "uuid": "29714f68-eeb4-431b-afe6-38866356446a",
00:08:13.856 "is_configured": false,
00:08:13.856 "data_offset": 0,
00:08:13.856 "data_size": 65536
00:08:13.856 },
00:08:13.856 {
00:08:13.856 "name": "BaseBdev3",
00:08:13.856 "uuid": "e2e0b436-b4f9-4850-a1c1-a022f6fa1c15",
00:08:13.856 "is_configured": true,
00:08:13.856 "data_offset": 0,
00:08:13.856 "data_size": 65536
00:08:13.856 }
00:08:13.856 ]
00:08:13.856 }'
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:13.856 21:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.116 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:14.116 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.116 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.116 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:14.116 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.376 [2024-11-26 21:15:32.324675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:14.376 BaseBdev1
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.376 [
00:08:14.376 {
00:08:14.376 "name": "BaseBdev1",
00:08:14.376 "aliases": [
00:08:14.376 "062c996f-6987-4022-ad8e-4e8e77191366"
00:08:14.376 ],
00:08:14.376 "product_name": "Malloc disk",
00:08:14.376 "block_size": 512,
00:08:14.376 "num_blocks": 65536,
00:08:14.376 "uuid": "062c996f-6987-4022-ad8e-4e8e77191366",
00:08:14.376 "assigned_rate_limits": {
00:08:14.376 "rw_ios_per_sec": 0,
00:08:14.376 "rw_mbytes_per_sec": 0,
00:08:14.376 "r_mbytes_per_sec": 0,
00:08:14.376 "w_mbytes_per_sec": 0
00:08:14.376 },
00:08:14.376 "claimed": true,
00:08:14.376 "claim_type": "exclusive_write",
00:08:14.376 "zoned": false,
00:08:14.376 "supported_io_types": {
00:08:14.376 "read": true,
00:08:14.376 "write": true,
00:08:14.376 "unmap": true,
00:08:14.376 "flush": true,
00:08:14.376 "reset": true,
00:08:14.376 "nvme_admin": false,
00:08:14.376 "nvme_io": false,
00:08:14.376 "nvme_io_md": false,
00:08:14.376 "write_zeroes": true,
00:08:14.376 "zcopy": true,
00:08:14.376 "get_zone_info": false,
00:08:14.376 "zone_management": false,
00:08:14.376 "zone_append": false,
00:08:14.376 "compare": false,
00:08:14.376 "compare_and_write": false,
00:08:14.376 "abort": true,
00:08:14.376 "seek_hole": false,
00:08:14.376 "seek_data": false,
00:08:14.376 "copy": true,
00:08:14.376 "nvme_iov_md": false
00:08:14.376 },
00:08:14.376 "memory_domains": [
00:08:14.376 {
00:08:14.376 "dma_device_id": "system",
00:08:14.376 "dma_device_type": 1
00:08:14.376 },
00:08:14.376 {
00:08:14.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:14.376 "dma_device_type": 2
00:08:14.376 }
00:08:14.376 ],
00:08:14.376 "driver_specific": {}
00:08:14.376 }
00:08:14.376 ]
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.376 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:14.376 "name": "Existed_Raid",
00:08:14.376 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:14.376 "strip_size_kb": 64,
00:08:14.376 "state": "configuring",
00:08:14.376 "raid_level": "raid0",
00:08:14.376 "superblock": false,
00:08:14.376 "num_base_bdevs": 3,
00:08:14.376 "num_base_bdevs_discovered": 2,
00:08:14.377 "num_base_bdevs_operational": 3,
00:08:14.377 "base_bdevs_list": [
00:08:14.377 {
00:08:14.377 "name": "BaseBdev1",
00:08:14.377 "uuid": "062c996f-6987-4022-ad8e-4e8e77191366",
00:08:14.377 "is_configured": true,
00:08:14.377 "data_offset": 0,
00:08:14.377 "data_size": 65536
00:08:14.377 },
00:08:14.377 {
00:08:14.377 "name": null,
00:08:14.377 "uuid": "29714f68-eeb4-431b-afe6-38866356446a",
00:08:14.377 "is_configured": false,
00:08:14.377 "data_offset": 0,
00:08:14.377 "data_size": 65536
00:08:14.377 },
00:08:14.377 {
00:08:14.377 "name": "BaseBdev3",
00:08:14.377 "uuid": "e2e0b436-b4f9-4850-a1c1-a022f6fa1c15",
00:08:14.377 "is_configured": true,
00:08:14.377 "data_offset": 0,
00:08:14.377 "data_size": 65536
00:08:14.377 }
00:08:14.377 ]
00:08:14.377 }'
00:08:14.377 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:14.377 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.947 [2024-11-26 21:15:32.847833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:14.947 "name": "Existed_Raid",
00:08:14.947 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:14.947 "strip_size_kb": 64,
00:08:14.947 "state": "configuring",
00:08:14.947 "raid_level": "raid0",
00:08:14.947 "superblock": false,
00:08:14.947 "num_base_bdevs": 3,
00:08:14.947 "num_base_bdevs_discovered": 1,
00:08:14.947 "num_base_bdevs_operational": 3,
00:08:14.947 "base_bdevs_list": [
00:08:14.947 {
00:08:14.947 "name": "BaseBdev1",
00:08:14.947 "uuid": "062c996f-6987-4022-ad8e-4e8e77191366",
00:08:14.947 "is_configured": true,
00:08:14.947 "data_offset": 0,
00:08:14.947 "data_size": 65536
00:08:14.947 },
00:08:14.947 {
00:08:14.947 "name": null,
00:08:14.947 "uuid": "29714f68-eeb4-431b-afe6-38866356446a",
00:08:14.947 "is_configured": false,
00:08:14.947 "data_offset": 0,
00:08:14.947 "data_size": 65536
00:08:14.947 },
00:08:14.947 {
00:08:14.947 "name": null,
00:08:14.947 "uuid": "e2e0b436-b4f9-4850-a1c1-a022f6fa1c15",
00:08:14.947 "is_configured": false,
00:08:14.947 "data_offset": 0,
00:08:14.947 "data_size": 65536
00:08:14.947 }
00:08:14.947 ]
00:08:14.947 }'
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:14.947 21:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.206 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:15.206 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.206 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.206 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:15.206 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.206 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.207 [2024-11-26 21:15:33.307102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.207 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.466 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:15.466 "name": "Existed_Raid",
00:08:15.466 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:15.466 "strip_size_kb": 64,
00:08:15.466 "state": "configuring",
00:08:15.466 "raid_level": "raid0",
00:08:15.466 "superblock": false,
00:08:15.466 "num_base_bdevs": 3,
00:08:15.466 "num_base_bdevs_discovered": 2,
00:08:15.466 "num_base_bdevs_operational": 3,
00:08:15.466 "base_bdevs_list": [
00:08:15.466 {
00:08:15.466 "name": "BaseBdev1",
00:08:15.466 "uuid": "062c996f-6987-4022-ad8e-4e8e77191366",
00:08:15.466 "is_configured": true,
00:08:15.466 "data_offset": 0,
00:08:15.466 "data_size": 65536
00:08:15.466 },
00:08:15.466 {
00:08:15.466 "name": null,
00:08:15.466 "uuid": "29714f68-eeb4-431b-afe6-38866356446a",
00:08:15.466 "is_configured": false,
00:08:15.466 "data_offset": 0,
00:08:15.466 "data_size": 65536
00:08:15.466 },
00:08:15.466 {
00:08:15.466 "name": "BaseBdev3",
00:08:15.466 "uuid": "e2e0b436-b4f9-4850-a1c1-a022f6fa1c15",
00:08:15.466 "is_configured": true,
00:08:15.466 "data_offset": 0,
00:08:15.466 "data_size": 65536
00:08:15.466 }
00:08:15.466 ]
00:08:15.466 }'
00:08:15.466 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:15.466 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.725 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:15.725 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.725 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.725 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:15.725 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.725 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:15.725 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:15.725 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.725 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.725 [2024-11-26 21:15:33.818262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:15.987 "name": "Existed_Raid",
00:08:15.987 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:15.987 "strip_size_kb": 64,
00:08:15.987 "state": "configuring",
00:08:15.987 "raid_level": "raid0",
00:08:15.987 "superblock": false,
00:08:15.987 "num_base_bdevs": 3,
00:08:15.987 "num_base_bdevs_discovered": 1,
00:08:15.987 "num_base_bdevs_operational": 3,
00:08:15.987 "base_bdevs_list": [
00:08:15.987 {
00:08:15.987 "name": null,
00:08:15.987 "uuid": "062c996f-6987-4022-ad8e-4e8e77191366",
00:08:15.987 "is_configured": false,
00:08:15.987 "data_offset": 0,
00:08:15.987 "data_size": 65536
00:08:15.987 },
00:08:15.987 {
00:08:15.987 "name": null,
00:08:15.987 "uuid": "29714f68-eeb4-431b-afe6-38866356446a",
00:08:15.987 "is_configured": false,
00:08:15.987 "data_offset": 0,
00:08:15.987 "data_size": 65536
00:08:15.987 },
00:08:15.987 {
00:08:15.987 "name": "BaseBdev3",
00:08:15.987 "uuid": "e2e0b436-b4f9-4850-a1c1-a022f6fa1c15",
00:08:15.987 "is_configured": true,
00:08:15.987 "data_offset": 0,
00:08:15.987 "data_size": 65536
00:08:15.987 }
00:08:15.987 ]
00:08:15.987 }'
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:15.987 21:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.248 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:16.248 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:16.248 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.248 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:16.248 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 --
# [[ 0 == 0 ]] 00:08:16.248 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.507 [2024-11-26 21:15:34.405622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.507 "name": "Existed_Raid", 00:08:16.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.507 "strip_size_kb": 64, 00:08:16.507 "state": "configuring", 00:08:16.507 "raid_level": "raid0", 00:08:16.507 "superblock": false, 00:08:16.507 "num_base_bdevs": 3, 00:08:16.507 "num_base_bdevs_discovered": 2, 00:08:16.507 "num_base_bdevs_operational": 3, 00:08:16.507 "base_bdevs_list": [ 00:08:16.507 { 00:08:16.507 "name": null, 00:08:16.507 "uuid": "062c996f-6987-4022-ad8e-4e8e77191366", 00:08:16.507 "is_configured": false, 00:08:16.507 "data_offset": 0, 00:08:16.507 "data_size": 65536 00:08:16.507 }, 00:08:16.507 { 00:08:16.507 "name": "BaseBdev2", 00:08:16.507 "uuid": "29714f68-eeb4-431b-afe6-38866356446a", 00:08:16.507 "is_configured": true, 00:08:16.507 "data_offset": 0, 00:08:16.507 "data_size": 65536 00:08:16.507 }, 00:08:16.507 { 00:08:16.507 "name": "BaseBdev3", 00:08:16.507 "uuid": "e2e0b436-b4f9-4850-a1c1-a022f6fa1c15", 00:08:16.507 "is_configured": true, 00:08:16.507 "data_offset": 0, 00:08:16.507 "data_size": 65536 00:08:16.507 } 00:08:16.507 ] 00:08:16.507 }' 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.507 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.765 
21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 062c996f-6987-4022-ad8e-4e8e77191366 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.765 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.766 [2024-11-26 21:15:34.905232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:16.766 [2024-11-26 21:15:34.905372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:16.766 [2024-11-26 21:15:34.905390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:16.766 [2024-11-26 21:15:34.905699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:16.766 [2024-11-26 21:15:34.905875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:16.766 [2024-11-26 21:15:34.905887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:16.766 NewBaseBdev 00:08:16.766 [2024-11-26 21:15:34.906199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.766 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:17.024 [ 00:08:17.024 { 00:08:17.024 "name": "NewBaseBdev", 00:08:17.024 "aliases": [ 00:08:17.024 "062c996f-6987-4022-ad8e-4e8e77191366" 00:08:17.024 ], 00:08:17.024 "product_name": "Malloc disk", 00:08:17.024 "block_size": 512, 00:08:17.024 "num_blocks": 65536, 00:08:17.024 "uuid": "062c996f-6987-4022-ad8e-4e8e77191366", 00:08:17.024 "assigned_rate_limits": { 00:08:17.024 "rw_ios_per_sec": 0, 00:08:17.024 "rw_mbytes_per_sec": 0, 00:08:17.024 "r_mbytes_per_sec": 0, 00:08:17.024 "w_mbytes_per_sec": 0 00:08:17.024 }, 00:08:17.024 "claimed": true, 00:08:17.024 "claim_type": "exclusive_write", 00:08:17.024 "zoned": false, 00:08:17.024 "supported_io_types": { 00:08:17.024 "read": true, 00:08:17.024 "write": true, 00:08:17.024 "unmap": true, 00:08:17.024 "flush": true, 00:08:17.024 "reset": true, 00:08:17.024 "nvme_admin": false, 00:08:17.024 "nvme_io": false, 00:08:17.024 "nvme_io_md": false, 00:08:17.024 "write_zeroes": true, 00:08:17.024 "zcopy": true, 00:08:17.024 "get_zone_info": false, 00:08:17.024 "zone_management": false, 00:08:17.024 "zone_append": false, 00:08:17.024 "compare": false, 00:08:17.024 "compare_and_write": false, 00:08:17.024 "abort": true, 00:08:17.024 "seek_hole": false, 00:08:17.024 "seek_data": false, 00:08:17.024 "copy": true, 00:08:17.024 "nvme_iov_md": false 00:08:17.024 }, 00:08:17.024 "memory_domains": [ 00:08:17.024 { 00:08:17.024 "dma_device_id": "system", 00:08:17.024 "dma_device_type": 1 00:08:17.024 }, 00:08:17.024 { 00:08:17.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.024 "dma_device_type": 2 00:08:17.024 } 00:08:17.024 ], 00:08:17.024 "driver_specific": {} 00:08:17.024 } 00:08:17.024 ] 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.024 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.024 "name": "Existed_Raid", 00:08:17.024 "uuid": "6c18ed1f-cdf5-4996-9818-13dc97447cd0", 00:08:17.024 "strip_size_kb": 64, 00:08:17.024 "state": "online", 00:08:17.024 "raid_level": "raid0", 00:08:17.024 "superblock": false, 00:08:17.024 "num_base_bdevs": 3, 00:08:17.024 
"num_base_bdevs_discovered": 3, 00:08:17.024 "num_base_bdevs_operational": 3, 00:08:17.024 "base_bdevs_list": [ 00:08:17.024 { 00:08:17.024 "name": "NewBaseBdev", 00:08:17.024 "uuid": "062c996f-6987-4022-ad8e-4e8e77191366", 00:08:17.025 "is_configured": true, 00:08:17.025 "data_offset": 0, 00:08:17.025 "data_size": 65536 00:08:17.025 }, 00:08:17.025 { 00:08:17.025 "name": "BaseBdev2", 00:08:17.025 "uuid": "29714f68-eeb4-431b-afe6-38866356446a", 00:08:17.025 "is_configured": true, 00:08:17.025 "data_offset": 0, 00:08:17.025 "data_size": 65536 00:08:17.025 }, 00:08:17.025 { 00:08:17.025 "name": "BaseBdev3", 00:08:17.025 "uuid": "e2e0b436-b4f9-4850-a1c1-a022f6fa1c15", 00:08:17.025 "is_configured": true, 00:08:17.025 "data_offset": 0, 00:08:17.025 "data_size": 65536 00:08:17.025 } 00:08:17.025 ] 00:08:17.025 }' 00:08:17.025 21:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.025 21:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.283 [2024-11-26 21:15:35.293181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.283 "name": "Existed_Raid", 00:08:17.283 "aliases": [ 00:08:17.283 "6c18ed1f-cdf5-4996-9818-13dc97447cd0" 00:08:17.283 ], 00:08:17.283 "product_name": "Raid Volume", 00:08:17.283 "block_size": 512, 00:08:17.283 "num_blocks": 196608, 00:08:17.283 "uuid": "6c18ed1f-cdf5-4996-9818-13dc97447cd0", 00:08:17.283 "assigned_rate_limits": { 00:08:17.283 "rw_ios_per_sec": 0, 00:08:17.283 "rw_mbytes_per_sec": 0, 00:08:17.283 "r_mbytes_per_sec": 0, 00:08:17.283 "w_mbytes_per_sec": 0 00:08:17.283 }, 00:08:17.283 "claimed": false, 00:08:17.283 "zoned": false, 00:08:17.283 "supported_io_types": { 00:08:17.283 "read": true, 00:08:17.283 "write": true, 00:08:17.283 "unmap": true, 00:08:17.283 "flush": true, 00:08:17.283 "reset": true, 00:08:17.283 "nvme_admin": false, 00:08:17.283 "nvme_io": false, 00:08:17.283 "nvme_io_md": false, 00:08:17.283 "write_zeroes": true, 00:08:17.283 "zcopy": false, 00:08:17.283 "get_zone_info": false, 00:08:17.283 "zone_management": false, 00:08:17.283 "zone_append": false, 00:08:17.283 "compare": false, 00:08:17.283 "compare_and_write": false, 00:08:17.283 "abort": false, 00:08:17.283 "seek_hole": false, 00:08:17.283 "seek_data": false, 00:08:17.283 "copy": false, 00:08:17.283 "nvme_iov_md": false 00:08:17.283 }, 00:08:17.283 "memory_domains": [ 00:08:17.283 { 00:08:17.283 "dma_device_id": "system", 00:08:17.283 "dma_device_type": 1 00:08:17.283 }, 00:08:17.283 { 00:08:17.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.283 "dma_device_type": 2 00:08:17.283 }, 
00:08:17.283 { 00:08:17.283 "dma_device_id": "system", 00:08:17.283 "dma_device_type": 1 00:08:17.283 }, 00:08:17.283 { 00:08:17.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.283 "dma_device_type": 2 00:08:17.283 }, 00:08:17.283 { 00:08:17.283 "dma_device_id": "system", 00:08:17.283 "dma_device_type": 1 00:08:17.283 }, 00:08:17.283 { 00:08:17.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.283 "dma_device_type": 2 00:08:17.283 } 00:08:17.283 ], 00:08:17.283 "driver_specific": { 00:08:17.283 "raid": { 00:08:17.283 "uuid": "6c18ed1f-cdf5-4996-9818-13dc97447cd0", 00:08:17.283 "strip_size_kb": 64, 00:08:17.283 "state": "online", 00:08:17.283 "raid_level": "raid0", 00:08:17.283 "superblock": false, 00:08:17.283 "num_base_bdevs": 3, 00:08:17.283 "num_base_bdevs_discovered": 3, 00:08:17.283 "num_base_bdevs_operational": 3, 00:08:17.283 "base_bdevs_list": [ 00:08:17.283 { 00:08:17.283 "name": "NewBaseBdev", 00:08:17.283 "uuid": "062c996f-6987-4022-ad8e-4e8e77191366", 00:08:17.283 "is_configured": true, 00:08:17.283 "data_offset": 0, 00:08:17.283 "data_size": 65536 00:08:17.283 }, 00:08:17.283 { 00:08:17.283 "name": "BaseBdev2", 00:08:17.283 "uuid": "29714f68-eeb4-431b-afe6-38866356446a", 00:08:17.283 "is_configured": true, 00:08:17.283 "data_offset": 0, 00:08:17.283 "data_size": 65536 00:08:17.283 }, 00:08:17.283 { 00:08:17.283 "name": "BaseBdev3", 00:08:17.283 "uuid": "e2e0b436-b4f9-4850-a1c1-a022f6fa1c15", 00:08:17.283 "is_configured": true, 00:08:17.283 "data_offset": 0, 00:08:17.283 "data_size": 65536 00:08:17.283 } 00:08:17.283 ] 00:08:17.283 } 00:08:17.283 } 00:08:17.283 }' 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:17.283 BaseBdev2 00:08:17.283 BaseBdev3' 00:08:17.283 21:15:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.283 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.541 [2024-11-26 21:15:35.556300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.541 [2024-11-26 21:15:35.556381] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.541 [2024-11-26 21:15:35.556493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.541 [2024-11-26 21:15:35.556562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.541 [2024-11-26 21:15:35.556577] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63673 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63673 ']' 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63673 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63673 00:08:17.541 killing process with pid 63673 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63673' 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63673 00:08:17.541 21:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63673 00:08:17.541 [2024-11-26 21:15:35.585230] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.799 [2024-11-26 21:15:35.930948] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.172 ************************************ 00:08:19.172 END TEST raid_state_function_test 00:08:19.172 ************************************ 00:08:19.172 21:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:19.172 00:08:19.172 real 0m10.362s 
00:08:19.172 user 0m16.406s 00:08:19.172 sys 0m1.639s 00:08:19.172 21:15:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.172 21:15:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.173 21:15:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:19.173 21:15:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:19.173 21:15:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.173 21:15:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.173 ************************************ 00:08:19.173 START TEST raid_state_function_test_sb 00:08:19.173 ************************************ 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:19.173 Process raid pid: 64294 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64294 
00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64294' 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64294 00:08:19.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64294 ']' 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.173 21:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.173 [2024-11-26 21:15:37.286013] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:19.173 [2024-11-26 21:15:37.286180] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.431 [2024-11-26 21:15:37.467277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.691 [2024-11-26 21:15:37.599551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.691 [2024-11-26 21:15:37.796195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.691 [2024-11-26 21:15:37.796236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.261 [2024-11-26 21:15:38.168870] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:20.261 [2024-11-26 21:15:38.168925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:20.261 [2024-11-26 21:15:38.168936] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:20.261 [2024-11-26 21:15:38.168946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:20.261 [2024-11-26 21:15:38.168952] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:20.261 [2024-11-26 21:15:38.168971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.261 "name": "Existed_Raid", 00:08:20.261 "uuid": "90e5dd26-dbbb-4d67-bf26-9b91888c902e", 00:08:20.261 "strip_size_kb": 64, 00:08:20.261 "state": "configuring", 00:08:20.261 "raid_level": "raid0", 00:08:20.261 "superblock": true, 00:08:20.261 "num_base_bdevs": 3, 00:08:20.261 "num_base_bdevs_discovered": 0, 00:08:20.261 "num_base_bdevs_operational": 3, 00:08:20.261 "base_bdevs_list": [ 00:08:20.261 { 00:08:20.261 "name": "BaseBdev1", 00:08:20.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.261 "is_configured": false, 00:08:20.261 "data_offset": 0, 00:08:20.261 "data_size": 0 00:08:20.261 }, 00:08:20.261 { 00:08:20.261 "name": "BaseBdev2", 00:08:20.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.261 "is_configured": false, 00:08:20.261 "data_offset": 0, 00:08:20.261 "data_size": 0 00:08:20.261 }, 00:08:20.261 { 00:08:20.261 "name": "BaseBdev3", 00:08:20.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.261 "is_configured": false, 00:08:20.261 "data_offset": 0, 00:08:20.261 "data_size": 0 00:08:20.261 } 00:08:20.261 ] 00:08:20.261 }' 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.261 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.521 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:20.521 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.521 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.521 [2024-11-26 21:15:38.644032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:20.521 [2024-11-26 21:15:38.644134] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:20.521 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.521 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:20.521 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.521 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.521 [2024-11-26 21:15:38.656010] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:20.521 [2024-11-26 21:15:38.656093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:20.521 [2024-11-26 21:15:38.656121] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:20.521 [2024-11-26 21:15:38.656144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:20.521 [2024-11-26 21:15:38.656162] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:20.521 [2024-11-26 21:15:38.656183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:20.521 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.521 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:20.521 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.521 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.781 [2024-11-26 21:15:38.703420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:20.781 BaseBdev1 
00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.781 [ 00:08:20.781 { 00:08:20.781 "name": "BaseBdev1", 00:08:20.781 "aliases": [ 00:08:20.781 "dd5a3ee0-8047-4aa2-8160-4d6ee32836bf" 00:08:20.781 ], 00:08:20.781 "product_name": "Malloc disk", 00:08:20.781 "block_size": 512, 00:08:20.781 "num_blocks": 65536, 00:08:20.781 "uuid": "dd5a3ee0-8047-4aa2-8160-4d6ee32836bf", 00:08:20.781 "assigned_rate_limits": { 00:08:20.781 
"rw_ios_per_sec": 0, 00:08:20.781 "rw_mbytes_per_sec": 0, 00:08:20.781 "r_mbytes_per_sec": 0, 00:08:20.781 "w_mbytes_per_sec": 0 00:08:20.781 }, 00:08:20.781 "claimed": true, 00:08:20.781 "claim_type": "exclusive_write", 00:08:20.781 "zoned": false, 00:08:20.781 "supported_io_types": { 00:08:20.781 "read": true, 00:08:20.781 "write": true, 00:08:20.781 "unmap": true, 00:08:20.781 "flush": true, 00:08:20.781 "reset": true, 00:08:20.781 "nvme_admin": false, 00:08:20.781 "nvme_io": false, 00:08:20.781 "nvme_io_md": false, 00:08:20.781 "write_zeroes": true, 00:08:20.781 "zcopy": true, 00:08:20.781 "get_zone_info": false, 00:08:20.781 "zone_management": false, 00:08:20.781 "zone_append": false, 00:08:20.781 "compare": false, 00:08:20.781 "compare_and_write": false, 00:08:20.781 "abort": true, 00:08:20.781 "seek_hole": false, 00:08:20.781 "seek_data": false, 00:08:20.781 "copy": true, 00:08:20.781 "nvme_iov_md": false 00:08:20.781 }, 00:08:20.781 "memory_domains": [ 00:08:20.781 { 00:08:20.781 "dma_device_id": "system", 00:08:20.781 "dma_device_type": 1 00:08:20.781 }, 00:08:20.781 { 00:08:20.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.781 "dma_device_type": 2 00:08:20.781 } 00:08:20.781 ], 00:08:20.781 "driver_specific": {} 00:08:20.781 } 00:08:20.781 ] 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.781 "name": "Existed_Raid", 00:08:20.781 "uuid": "a8c2e131-8987-46fc-97fe-d10e2a696b34", 00:08:20.781 "strip_size_kb": 64, 00:08:20.781 "state": "configuring", 00:08:20.781 "raid_level": "raid0", 00:08:20.781 "superblock": true, 00:08:20.781 "num_base_bdevs": 3, 00:08:20.781 "num_base_bdevs_discovered": 1, 00:08:20.781 "num_base_bdevs_operational": 3, 00:08:20.781 "base_bdevs_list": [ 00:08:20.781 { 00:08:20.781 "name": "BaseBdev1", 00:08:20.781 "uuid": "dd5a3ee0-8047-4aa2-8160-4d6ee32836bf", 00:08:20.781 "is_configured": true, 00:08:20.781 "data_offset": 2048, 00:08:20.781 "data_size": 63488 
00:08:20.781 }, 00:08:20.781 { 00:08:20.781 "name": "BaseBdev2", 00:08:20.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.781 "is_configured": false, 00:08:20.781 "data_offset": 0, 00:08:20.781 "data_size": 0 00:08:20.781 }, 00:08:20.781 { 00:08:20.781 "name": "BaseBdev3", 00:08:20.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.781 "is_configured": false, 00:08:20.781 "data_offset": 0, 00:08:20.781 "data_size": 0 00:08:20.781 } 00:08:20.781 ] 00:08:20.781 }' 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.781 21:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.351 [2024-11-26 21:15:39.222570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.351 [2024-11-26 21:15:39.222626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.351 [2024-11-26 21:15:39.230605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.351 [2024-11-26 
21:15:39.232505] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.351 [2024-11-26 21:15:39.232597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.351 [2024-11-26 21:15:39.232628] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:21.351 [2024-11-26 21:15:39.232652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.351 "name": "Existed_Raid", 00:08:21.351 "uuid": "629115ab-d8a5-4b10-ae8e-2b5f5ceb8a68", 00:08:21.351 "strip_size_kb": 64, 00:08:21.351 "state": "configuring", 00:08:21.351 "raid_level": "raid0", 00:08:21.351 "superblock": true, 00:08:21.351 "num_base_bdevs": 3, 00:08:21.351 "num_base_bdevs_discovered": 1, 00:08:21.351 "num_base_bdevs_operational": 3, 00:08:21.351 "base_bdevs_list": [ 00:08:21.351 { 00:08:21.351 "name": "BaseBdev1", 00:08:21.351 "uuid": "dd5a3ee0-8047-4aa2-8160-4d6ee32836bf", 00:08:21.351 "is_configured": true, 00:08:21.351 "data_offset": 2048, 00:08:21.351 "data_size": 63488 00:08:21.351 }, 00:08:21.351 { 00:08:21.351 "name": "BaseBdev2", 00:08:21.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.351 "is_configured": false, 00:08:21.351 "data_offset": 0, 00:08:21.351 "data_size": 0 00:08:21.351 }, 00:08:21.351 { 00:08:21.351 "name": "BaseBdev3", 00:08:21.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.351 "is_configured": false, 00:08:21.351 "data_offset": 0, 00:08:21.351 "data_size": 0 00:08:21.351 } 00:08:21.351 ] 00:08:21.351 }' 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.351 21:15:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.610 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 [2024-11-26 21:15:39.689389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.611 BaseBdev2 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 [ 00:08:21.611 { 00:08:21.611 "name": "BaseBdev2", 00:08:21.611 "aliases": [ 00:08:21.611 "fc871d51-2c03-47eb-80ab-f33c0d091f1f" 00:08:21.611 ], 00:08:21.611 "product_name": "Malloc disk", 00:08:21.611 "block_size": 512, 00:08:21.611 "num_blocks": 65536, 00:08:21.611 "uuid": "fc871d51-2c03-47eb-80ab-f33c0d091f1f", 00:08:21.611 "assigned_rate_limits": { 00:08:21.611 "rw_ios_per_sec": 0, 00:08:21.611 "rw_mbytes_per_sec": 0, 00:08:21.611 "r_mbytes_per_sec": 0, 00:08:21.611 "w_mbytes_per_sec": 0 00:08:21.611 }, 00:08:21.611 "claimed": true, 00:08:21.611 "claim_type": "exclusive_write", 00:08:21.611 "zoned": false, 00:08:21.611 "supported_io_types": { 00:08:21.611 "read": true, 00:08:21.611 "write": true, 00:08:21.611 "unmap": true, 00:08:21.611 "flush": true, 00:08:21.611 "reset": true, 00:08:21.611 "nvme_admin": false, 00:08:21.611 "nvme_io": false, 00:08:21.611 "nvme_io_md": false, 00:08:21.611 "write_zeroes": true, 00:08:21.611 "zcopy": true, 00:08:21.611 "get_zone_info": false, 00:08:21.611 "zone_management": false, 00:08:21.611 "zone_append": false, 00:08:21.611 "compare": false, 00:08:21.611 "compare_and_write": false, 00:08:21.611 "abort": true, 00:08:21.611 "seek_hole": false, 00:08:21.611 "seek_data": false, 00:08:21.611 "copy": true, 00:08:21.611 "nvme_iov_md": false 00:08:21.611 }, 00:08:21.611 "memory_domains": [ 00:08:21.611 { 00:08:21.611 "dma_device_id": "system", 00:08:21.611 "dma_device_type": 1 00:08:21.611 }, 00:08:21.611 { 00:08:21.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.611 "dma_device_type": 2 00:08:21.611 } 00:08:21.611 ], 00:08:21.611 "driver_specific": {} 00:08:21.611 } 00:08:21.611 ] 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 21:15:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.870 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.870 "name": "Existed_Raid", 00:08:21.870 "uuid": "629115ab-d8a5-4b10-ae8e-2b5f5ceb8a68", 00:08:21.870 "strip_size_kb": 64, 00:08:21.870 "state": "configuring", 00:08:21.870 "raid_level": "raid0", 00:08:21.870 "superblock": true, 00:08:21.870 "num_base_bdevs": 3, 00:08:21.870 "num_base_bdevs_discovered": 2, 00:08:21.870 "num_base_bdevs_operational": 3, 00:08:21.870 "base_bdevs_list": [ 00:08:21.870 { 00:08:21.870 "name": "BaseBdev1", 00:08:21.870 "uuid": "dd5a3ee0-8047-4aa2-8160-4d6ee32836bf", 00:08:21.870 "is_configured": true, 00:08:21.870 "data_offset": 2048, 00:08:21.870 "data_size": 63488 00:08:21.870 }, 00:08:21.870 { 00:08:21.870 "name": "BaseBdev2", 00:08:21.870 "uuid": "fc871d51-2c03-47eb-80ab-f33c0d091f1f", 00:08:21.870 "is_configured": true, 00:08:21.870 "data_offset": 2048, 00:08:21.870 "data_size": 63488 00:08:21.870 }, 00:08:21.870 { 00:08:21.870 "name": "BaseBdev3", 00:08:21.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.870 "is_configured": false, 00:08:21.870 "data_offset": 0, 00:08:21.870 "data_size": 0 00:08:21.870 } 00:08:21.870 ] 00:08:21.870 }' 00:08:21.870 21:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.870 21:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.130 [2024-11-26 21:15:40.195717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.130 [2024-11-26 21:15:40.196091] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.130 [2024-11-26 21:15:40.196158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:22.130 [2024-11-26 21:15:40.196440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:22.130 [2024-11-26 21:15:40.196634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.130 [2024-11-26 21:15:40.196676] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:22.130 BaseBdev3 00:08:22.130 [2024-11-26 21:15:40.196876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.130 [ 00:08:22.130 { 00:08:22.130 "name": "BaseBdev3", 00:08:22.130 "aliases": [ 00:08:22.130 "63d70b74-50f4-4b78-b007-8d0212a80fb2" 00:08:22.130 ], 00:08:22.130 "product_name": "Malloc disk", 00:08:22.130 "block_size": 512, 00:08:22.130 "num_blocks": 65536, 00:08:22.130 "uuid": "63d70b74-50f4-4b78-b007-8d0212a80fb2", 00:08:22.130 "assigned_rate_limits": { 00:08:22.130 "rw_ios_per_sec": 0, 00:08:22.130 "rw_mbytes_per_sec": 0, 00:08:22.130 "r_mbytes_per_sec": 0, 00:08:22.130 "w_mbytes_per_sec": 0 00:08:22.130 }, 00:08:22.130 "claimed": true, 00:08:22.130 "claim_type": "exclusive_write", 00:08:22.130 "zoned": false, 00:08:22.130 "supported_io_types": { 00:08:22.130 "read": true, 00:08:22.130 "write": true, 00:08:22.130 "unmap": true, 00:08:22.130 "flush": true, 00:08:22.130 "reset": true, 00:08:22.130 "nvme_admin": false, 00:08:22.130 "nvme_io": false, 00:08:22.130 "nvme_io_md": false, 00:08:22.130 "write_zeroes": true, 00:08:22.130 "zcopy": true, 00:08:22.130 "get_zone_info": false, 00:08:22.130 "zone_management": false, 00:08:22.130 "zone_append": false, 00:08:22.130 "compare": false, 00:08:22.130 "compare_and_write": false, 00:08:22.130 "abort": true, 00:08:22.130 "seek_hole": false, 00:08:22.130 "seek_data": false, 00:08:22.130 "copy": true, 00:08:22.130 "nvme_iov_md": false 00:08:22.130 }, 00:08:22.130 "memory_domains": [ 00:08:22.130 { 00:08:22.130 "dma_device_id": "system", 00:08:22.130 "dma_device_type": 1 00:08:22.130 }, 00:08:22.130 { 00:08:22.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.130 "dma_device_type": 2 00:08:22.130 } 00:08:22.130 ], 00:08:22.130 "driver_specific": 
{} 00:08:22.130 } 00:08:22.130 ] 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.130 "name": "Existed_Raid", 00:08:22.130 "uuid": "629115ab-d8a5-4b10-ae8e-2b5f5ceb8a68", 00:08:22.130 "strip_size_kb": 64, 00:08:22.130 "state": "online", 00:08:22.130 "raid_level": "raid0", 00:08:22.130 "superblock": true, 00:08:22.130 "num_base_bdevs": 3, 00:08:22.130 "num_base_bdevs_discovered": 3, 00:08:22.130 "num_base_bdevs_operational": 3, 00:08:22.130 "base_bdevs_list": [ 00:08:22.130 { 00:08:22.130 "name": "BaseBdev1", 00:08:22.130 "uuid": "dd5a3ee0-8047-4aa2-8160-4d6ee32836bf", 00:08:22.130 "is_configured": true, 00:08:22.130 "data_offset": 2048, 00:08:22.130 "data_size": 63488 00:08:22.130 }, 00:08:22.130 { 00:08:22.130 "name": "BaseBdev2", 00:08:22.130 "uuid": "fc871d51-2c03-47eb-80ab-f33c0d091f1f", 00:08:22.130 "is_configured": true, 00:08:22.130 "data_offset": 2048, 00:08:22.130 "data_size": 63488 00:08:22.130 }, 00:08:22.130 { 00:08:22.130 "name": "BaseBdev3", 00:08:22.130 "uuid": "63d70b74-50f4-4b78-b007-8d0212a80fb2", 00:08:22.130 "is_configured": true, 00:08:22.130 "data_offset": 2048, 00:08:22.130 "data_size": 63488 00:08:22.130 } 00:08:22.130 ] 00:08:22.130 }' 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.130 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.698 [2024-11-26 21:15:40.659311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.698 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.698 "name": "Existed_Raid", 00:08:22.698 "aliases": [ 00:08:22.698 "629115ab-d8a5-4b10-ae8e-2b5f5ceb8a68" 00:08:22.698 ], 00:08:22.698 "product_name": "Raid Volume", 00:08:22.698 "block_size": 512, 00:08:22.698 "num_blocks": 190464, 00:08:22.698 "uuid": "629115ab-d8a5-4b10-ae8e-2b5f5ceb8a68", 00:08:22.698 "assigned_rate_limits": { 00:08:22.698 "rw_ios_per_sec": 0, 00:08:22.698 "rw_mbytes_per_sec": 0, 00:08:22.698 "r_mbytes_per_sec": 0, 00:08:22.698 "w_mbytes_per_sec": 0 00:08:22.698 }, 00:08:22.698 "claimed": false, 00:08:22.698 "zoned": false, 00:08:22.698 "supported_io_types": { 00:08:22.698 "read": true, 00:08:22.698 "write": true, 00:08:22.698 "unmap": true, 00:08:22.698 "flush": true, 00:08:22.698 "reset": true, 00:08:22.698 "nvme_admin": false, 00:08:22.698 "nvme_io": false, 00:08:22.698 "nvme_io_md": false, 00:08:22.698 
"write_zeroes": true, 00:08:22.698 "zcopy": false, 00:08:22.698 "get_zone_info": false, 00:08:22.698 "zone_management": false, 00:08:22.698 "zone_append": false, 00:08:22.698 "compare": false, 00:08:22.698 "compare_and_write": false, 00:08:22.698 "abort": false, 00:08:22.698 "seek_hole": false, 00:08:22.698 "seek_data": false, 00:08:22.698 "copy": false, 00:08:22.698 "nvme_iov_md": false 00:08:22.698 }, 00:08:22.698 "memory_domains": [ 00:08:22.698 { 00:08:22.698 "dma_device_id": "system", 00:08:22.698 "dma_device_type": 1 00:08:22.698 }, 00:08:22.698 { 00:08:22.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.698 "dma_device_type": 2 00:08:22.698 }, 00:08:22.698 { 00:08:22.698 "dma_device_id": "system", 00:08:22.698 "dma_device_type": 1 00:08:22.698 }, 00:08:22.698 { 00:08:22.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.698 "dma_device_type": 2 00:08:22.698 }, 00:08:22.699 { 00:08:22.699 "dma_device_id": "system", 00:08:22.699 "dma_device_type": 1 00:08:22.699 }, 00:08:22.699 { 00:08:22.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.699 "dma_device_type": 2 00:08:22.699 } 00:08:22.699 ], 00:08:22.699 "driver_specific": { 00:08:22.699 "raid": { 00:08:22.699 "uuid": "629115ab-d8a5-4b10-ae8e-2b5f5ceb8a68", 00:08:22.699 "strip_size_kb": 64, 00:08:22.699 "state": "online", 00:08:22.699 "raid_level": "raid0", 00:08:22.699 "superblock": true, 00:08:22.699 "num_base_bdevs": 3, 00:08:22.699 "num_base_bdevs_discovered": 3, 00:08:22.699 "num_base_bdevs_operational": 3, 00:08:22.699 "base_bdevs_list": [ 00:08:22.699 { 00:08:22.699 "name": "BaseBdev1", 00:08:22.699 "uuid": "dd5a3ee0-8047-4aa2-8160-4d6ee32836bf", 00:08:22.699 "is_configured": true, 00:08:22.699 "data_offset": 2048, 00:08:22.699 "data_size": 63488 00:08:22.699 }, 00:08:22.699 { 00:08:22.699 "name": "BaseBdev2", 00:08:22.699 "uuid": "fc871d51-2c03-47eb-80ab-f33c0d091f1f", 00:08:22.699 "is_configured": true, 00:08:22.699 "data_offset": 2048, 00:08:22.699 "data_size": 63488 00:08:22.699 }, 
00:08:22.699 { 00:08:22.699 "name": "BaseBdev3", 00:08:22.699 "uuid": "63d70b74-50f4-4b78-b007-8d0212a80fb2", 00:08:22.699 "is_configured": true, 00:08:22.699 "data_offset": 2048, 00:08:22.699 "data_size": 63488 00:08:22.699 } 00:08:22.699 ] 00:08:22.699 } 00:08:22.699 } 00:08:22.699 }' 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:22.699 BaseBdev2 00:08:22.699 BaseBdev3' 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.699 
21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.699 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.958 [2024-11-26 21:15:40.898606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:22.958 [2024-11-26 21:15:40.898634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.958 [2024-11-26 21:15:40.898682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:22.958 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.959 21:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.959 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.959 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.959 "name": "Existed_Raid", 00:08:22.959 "uuid": "629115ab-d8a5-4b10-ae8e-2b5f5ceb8a68", 00:08:22.959 "strip_size_kb": 64, 00:08:22.959 "state": "offline", 00:08:22.959 "raid_level": "raid0", 00:08:22.959 "superblock": true, 00:08:22.959 "num_base_bdevs": 3, 00:08:22.959 "num_base_bdevs_discovered": 2, 00:08:22.959 "num_base_bdevs_operational": 2, 00:08:22.959 "base_bdevs_list": [ 00:08:22.959 { 00:08:22.959 "name": null, 00:08:22.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.959 "is_configured": false, 00:08:22.959 "data_offset": 0, 00:08:22.959 "data_size": 63488 00:08:22.959 }, 00:08:22.959 { 00:08:22.959 "name": "BaseBdev2", 00:08:22.959 "uuid": "fc871d51-2c03-47eb-80ab-f33c0d091f1f", 00:08:22.959 "is_configured": true, 00:08:22.959 "data_offset": 2048, 00:08:22.959 "data_size": 63488 00:08:22.959 }, 00:08:22.959 { 00:08:22.959 "name": "BaseBdev3", 00:08:22.959 "uuid": "63d70b74-50f4-4b78-b007-8d0212a80fb2", 
00:08:22.959 "is_configured": true, 00:08:22.959 "data_offset": 2048, 00:08:22.959 "data_size": 63488 00:08:22.959 } 00:08:22.959 ] 00:08:22.959 }' 00:08:22.959 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.959 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.528 [2024-11-26 21:15:41.455684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.528 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.528 [2024-11-26 21:15:41.599530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:23.528 [2024-11-26 21:15:41.599626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.800 BaseBdev2 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.800 [ 00:08:23.800 { 00:08:23.800 "name": "BaseBdev2", 00:08:23.800 "aliases": [ 00:08:23.800 "59ee2796-d125-428e-b8b0-750774e31ee5" 00:08:23.800 ], 00:08:23.800 "product_name": "Malloc disk", 00:08:23.800 "block_size": 512, 00:08:23.800 "num_blocks": 65536, 00:08:23.800 "uuid": "59ee2796-d125-428e-b8b0-750774e31ee5", 00:08:23.800 "assigned_rate_limits": { 00:08:23.800 "rw_ios_per_sec": 0, 00:08:23.800 "rw_mbytes_per_sec": 0, 00:08:23.800 "r_mbytes_per_sec": 0, 00:08:23.800 "w_mbytes_per_sec": 0 00:08:23.800 }, 00:08:23.800 "claimed": false, 00:08:23.800 "zoned": false, 00:08:23.800 "supported_io_types": { 00:08:23.800 "read": true, 00:08:23.800 "write": true, 00:08:23.800 "unmap": true, 00:08:23.800 "flush": true, 00:08:23.800 "reset": true, 00:08:23.800 "nvme_admin": false, 00:08:23.800 "nvme_io": false, 00:08:23.800 "nvme_io_md": false, 00:08:23.800 "write_zeroes": true, 00:08:23.800 "zcopy": true, 00:08:23.800 "get_zone_info": false, 00:08:23.800 "zone_management": false, 00:08:23.800 
"zone_append": false, 00:08:23.800 "compare": false, 00:08:23.800 "compare_and_write": false, 00:08:23.800 "abort": true, 00:08:23.800 "seek_hole": false, 00:08:23.800 "seek_data": false, 00:08:23.800 "copy": true, 00:08:23.800 "nvme_iov_md": false 00:08:23.800 }, 00:08:23.800 "memory_domains": [ 00:08:23.800 { 00:08:23.800 "dma_device_id": "system", 00:08:23.800 "dma_device_type": 1 00:08:23.800 }, 00:08:23.800 { 00:08:23.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.800 "dma_device_type": 2 00:08:23.800 } 00:08:23.800 ], 00:08:23.800 "driver_specific": {} 00:08:23.800 } 00:08:23.800 ] 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.800 BaseBdev3 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.800 
21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.800 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.800 [ 00:08:23.800 { 00:08:23.801 "name": "BaseBdev3", 00:08:23.801 "aliases": [ 00:08:23.801 "6297a072-5dc5-4924-9ddb-d3a117332e12" 00:08:23.801 ], 00:08:23.801 "product_name": "Malloc disk", 00:08:23.801 "block_size": 512, 00:08:23.801 "num_blocks": 65536, 00:08:23.801 "uuid": "6297a072-5dc5-4924-9ddb-d3a117332e12", 00:08:23.801 "assigned_rate_limits": { 00:08:23.801 "rw_ios_per_sec": 0, 00:08:23.801 "rw_mbytes_per_sec": 0, 00:08:23.801 "r_mbytes_per_sec": 0, 00:08:23.801 "w_mbytes_per_sec": 0 00:08:23.801 }, 00:08:23.801 "claimed": false, 00:08:23.801 "zoned": false, 00:08:23.801 "supported_io_types": { 00:08:23.801 "read": true, 00:08:23.801 "write": true, 00:08:23.801 "unmap": true, 00:08:23.801 "flush": true, 00:08:23.801 "reset": true, 00:08:23.801 "nvme_admin": false, 00:08:23.801 "nvme_io": false, 00:08:23.801 "nvme_io_md": false, 00:08:23.801 "write_zeroes": true, 00:08:23.801 "zcopy": true, 00:08:23.801 "get_zone_info": false, 
00:08:23.801 "zone_management": false, 00:08:23.801 "zone_append": false, 00:08:23.801 "compare": false, 00:08:23.801 "compare_and_write": false, 00:08:23.801 "abort": true, 00:08:23.801 "seek_hole": false, 00:08:23.801 "seek_data": false, 00:08:23.801 "copy": true, 00:08:23.801 "nvme_iov_md": false 00:08:23.801 }, 00:08:23.801 "memory_domains": [ 00:08:23.801 { 00:08:23.801 "dma_device_id": "system", 00:08:23.801 "dma_device_type": 1 00:08:23.801 }, 00:08:23.801 { 00:08:23.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.801 "dma_device_type": 2 00:08:23.801 } 00:08:23.801 ], 00:08:23.801 "driver_specific": {} 00:08:23.801 } 00:08:23.801 ] 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.801 [2024-11-26 21:15:41.893094] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.801 [2024-11-26 21:15:41.893202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.801 [2024-11-26 21:15:41.893245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.801 [2024-11-26 21:15:41.895019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.801 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.091 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:24.091 "name": "Existed_Raid", 00:08:24.091 "uuid": "1fcdf3cf-c1c5-4d67-866c-35895e088a73", 00:08:24.091 "strip_size_kb": 64, 00:08:24.091 "state": "configuring", 00:08:24.091 "raid_level": "raid0", 00:08:24.091 "superblock": true, 00:08:24.091 "num_base_bdevs": 3, 00:08:24.091 "num_base_bdevs_discovered": 2, 00:08:24.091 "num_base_bdevs_operational": 3, 00:08:24.091 "base_bdevs_list": [ 00:08:24.091 { 00:08:24.091 "name": "BaseBdev1", 00:08:24.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.091 "is_configured": false, 00:08:24.091 "data_offset": 0, 00:08:24.091 "data_size": 0 00:08:24.091 }, 00:08:24.091 { 00:08:24.091 "name": "BaseBdev2", 00:08:24.091 "uuid": "59ee2796-d125-428e-b8b0-750774e31ee5", 00:08:24.091 "is_configured": true, 00:08:24.091 "data_offset": 2048, 00:08:24.091 "data_size": 63488 00:08:24.091 }, 00:08:24.091 { 00:08:24.091 "name": "BaseBdev3", 00:08:24.091 "uuid": "6297a072-5dc5-4924-9ddb-d3a117332e12", 00:08:24.091 "is_configured": true, 00:08:24.091 "data_offset": 2048, 00:08:24.091 "data_size": 63488 00:08:24.091 } 00:08:24.091 ] 00:08:24.091 }' 00:08:24.091 21:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.091 21:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.351 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:24.351 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.351 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.351 [2024-11-26 21:15:42.368288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.351 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.352 "name": "Existed_Raid", 00:08:24.352 "uuid": "1fcdf3cf-c1c5-4d67-866c-35895e088a73", 00:08:24.352 "strip_size_kb": 64, 00:08:24.352 "state": "configuring", 00:08:24.352 "raid_level": "raid0", 
00:08:24.352 "superblock": true, 00:08:24.352 "num_base_bdevs": 3, 00:08:24.352 "num_base_bdevs_discovered": 1, 00:08:24.352 "num_base_bdevs_operational": 3, 00:08:24.352 "base_bdevs_list": [ 00:08:24.352 { 00:08:24.352 "name": "BaseBdev1", 00:08:24.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.352 "is_configured": false, 00:08:24.352 "data_offset": 0, 00:08:24.352 "data_size": 0 00:08:24.352 }, 00:08:24.352 { 00:08:24.352 "name": null, 00:08:24.352 "uuid": "59ee2796-d125-428e-b8b0-750774e31ee5", 00:08:24.352 "is_configured": false, 00:08:24.352 "data_offset": 0, 00:08:24.352 "data_size": 63488 00:08:24.352 }, 00:08:24.352 { 00:08:24.352 "name": "BaseBdev3", 00:08:24.352 "uuid": "6297a072-5dc5-4924-9ddb-d3a117332e12", 00:08:24.352 "is_configured": true, 00:08:24.352 "data_offset": 2048, 00:08:24.352 "data_size": 63488 00:08:24.352 } 00:08:24.352 ] 00:08:24.352 }' 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.352 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.922 [2024-11-26 21:15:42.907459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.922 BaseBdev1 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.922 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.923 [ 00:08:24.923 { 00:08:24.923 "name": "BaseBdev1", 00:08:24.923 
"aliases": [ 00:08:24.923 "2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa" 00:08:24.923 ], 00:08:24.923 "product_name": "Malloc disk", 00:08:24.923 "block_size": 512, 00:08:24.923 "num_blocks": 65536, 00:08:24.923 "uuid": "2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa", 00:08:24.923 "assigned_rate_limits": { 00:08:24.923 "rw_ios_per_sec": 0, 00:08:24.923 "rw_mbytes_per_sec": 0, 00:08:24.923 "r_mbytes_per_sec": 0, 00:08:24.923 "w_mbytes_per_sec": 0 00:08:24.923 }, 00:08:24.923 "claimed": true, 00:08:24.923 "claim_type": "exclusive_write", 00:08:24.923 "zoned": false, 00:08:24.923 "supported_io_types": { 00:08:24.923 "read": true, 00:08:24.923 "write": true, 00:08:24.923 "unmap": true, 00:08:24.923 "flush": true, 00:08:24.923 "reset": true, 00:08:24.923 "nvme_admin": false, 00:08:24.923 "nvme_io": false, 00:08:24.923 "nvme_io_md": false, 00:08:24.923 "write_zeroes": true, 00:08:24.923 "zcopy": true, 00:08:24.923 "get_zone_info": false, 00:08:24.923 "zone_management": false, 00:08:24.923 "zone_append": false, 00:08:24.923 "compare": false, 00:08:24.923 "compare_and_write": false, 00:08:24.923 "abort": true, 00:08:24.923 "seek_hole": false, 00:08:24.923 "seek_data": false, 00:08:24.923 "copy": true, 00:08:24.923 "nvme_iov_md": false 00:08:24.923 }, 00:08:24.923 "memory_domains": [ 00:08:24.923 { 00:08:24.923 "dma_device_id": "system", 00:08:24.923 "dma_device_type": 1 00:08:24.923 }, 00:08:24.923 { 00:08:24.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.923 "dma_device_type": 2 00:08:24.923 } 00:08:24.923 ], 00:08:24.923 "driver_specific": {} 00:08:24.923 } 00:08:24.923 ] 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.923 21:15:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.923 "name": "Existed_Raid", 00:08:24.923 "uuid": "1fcdf3cf-c1c5-4d67-866c-35895e088a73", 00:08:24.923 "strip_size_kb": 64, 00:08:24.923 "state": "configuring", 00:08:24.923 "raid_level": "raid0", 00:08:24.923 "superblock": true, 00:08:24.923 "num_base_bdevs": 3, 00:08:24.923 
"num_base_bdevs_discovered": 2, 00:08:24.923 "num_base_bdevs_operational": 3, 00:08:24.923 "base_bdevs_list": [ 00:08:24.923 { 00:08:24.923 "name": "BaseBdev1", 00:08:24.923 "uuid": "2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa", 00:08:24.923 "is_configured": true, 00:08:24.923 "data_offset": 2048, 00:08:24.923 "data_size": 63488 00:08:24.923 }, 00:08:24.923 { 00:08:24.923 "name": null, 00:08:24.923 "uuid": "59ee2796-d125-428e-b8b0-750774e31ee5", 00:08:24.923 "is_configured": false, 00:08:24.923 "data_offset": 0, 00:08:24.923 "data_size": 63488 00:08:24.923 }, 00:08:24.923 { 00:08:24.923 "name": "BaseBdev3", 00:08:24.923 "uuid": "6297a072-5dc5-4924-9ddb-d3a117332e12", 00:08:24.923 "is_configured": true, 00:08:24.923 "data_offset": 2048, 00:08:24.923 "data_size": 63488 00:08:24.923 } 00:08:24.923 ] 00:08:24.923 }' 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.923 21:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.182 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.182 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.182 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.182 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.443 21:15:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.443 [2024-11-26 21:15:43.386724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.443 21:15:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.443 "name": "Existed_Raid", 00:08:25.443 "uuid": "1fcdf3cf-c1c5-4d67-866c-35895e088a73", 00:08:25.443 "strip_size_kb": 64, 00:08:25.443 "state": "configuring", 00:08:25.443 "raid_level": "raid0", 00:08:25.443 "superblock": true, 00:08:25.443 "num_base_bdevs": 3, 00:08:25.443 "num_base_bdevs_discovered": 1, 00:08:25.443 "num_base_bdevs_operational": 3, 00:08:25.443 "base_bdevs_list": [ 00:08:25.443 { 00:08:25.443 "name": "BaseBdev1", 00:08:25.443 "uuid": "2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa", 00:08:25.443 "is_configured": true, 00:08:25.443 "data_offset": 2048, 00:08:25.443 "data_size": 63488 00:08:25.443 }, 00:08:25.443 { 00:08:25.443 "name": null, 00:08:25.443 "uuid": "59ee2796-d125-428e-b8b0-750774e31ee5", 00:08:25.443 "is_configured": false, 00:08:25.443 "data_offset": 0, 00:08:25.443 "data_size": 63488 00:08:25.443 }, 00:08:25.443 { 00:08:25.443 "name": null, 00:08:25.443 "uuid": "6297a072-5dc5-4924-9ddb-d3a117332e12", 00:08:25.443 "is_configured": false, 00:08:25.443 "data_offset": 0, 00:08:25.443 "data_size": 63488 00:08:25.443 } 00:08:25.443 ] 00:08:25.443 }' 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.443 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.703 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.704 21:15:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.704 [2024-11-26 21:15:43.838019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.704 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.979 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.979 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.979 "name": "Existed_Raid", 00:08:25.979 "uuid": "1fcdf3cf-c1c5-4d67-866c-35895e088a73", 00:08:25.979 "strip_size_kb": 64, 00:08:25.979 "state": "configuring", 00:08:25.979 "raid_level": "raid0", 00:08:25.979 "superblock": true, 00:08:25.979 "num_base_bdevs": 3, 00:08:25.979 "num_base_bdevs_discovered": 2, 00:08:25.979 "num_base_bdevs_operational": 3, 00:08:25.979 "base_bdevs_list": [ 00:08:25.979 { 00:08:25.979 "name": "BaseBdev1", 00:08:25.979 "uuid": "2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa", 00:08:25.979 "is_configured": true, 00:08:25.979 "data_offset": 2048, 00:08:25.979 "data_size": 63488 00:08:25.979 }, 00:08:25.979 { 00:08:25.979 "name": null, 00:08:25.979 "uuid": "59ee2796-d125-428e-b8b0-750774e31ee5", 00:08:25.979 "is_configured": false, 00:08:25.979 "data_offset": 0, 00:08:25.979 "data_size": 63488 00:08:25.979 }, 00:08:25.979 { 00:08:25.979 "name": "BaseBdev3", 00:08:25.979 "uuid": "6297a072-5dc5-4924-9ddb-d3a117332e12", 00:08:25.979 "is_configured": true, 00:08:25.979 "data_offset": 2048, 00:08:25.979 "data_size": 63488 00:08:25.979 } 00:08:25.979 ] 00:08:25.979 }' 00:08:25.979 21:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.979 21:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:26.238 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.239 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:26.239 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.239 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.239 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.239 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:26.239 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:26.239 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.239 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.239 [2024-11-26 21:15:44.321170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.499 "name": "Existed_Raid", 00:08:26.499 "uuid": "1fcdf3cf-c1c5-4d67-866c-35895e088a73", 00:08:26.499 "strip_size_kb": 64, 00:08:26.499 "state": "configuring", 00:08:26.499 "raid_level": "raid0", 00:08:26.499 "superblock": true, 00:08:26.499 "num_base_bdevs": 3, 00:08:26.499 "num_base_bdevs_discovered": 1, 00:08:26.499 "num_base_bdevs_operational": 3, 00:08:26.499 "base_bdevs_list": [ 00:08:26.499 { 00:08:26.499 "name": null, 00:08:26.499 "uuid": "2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa", 00:08:26.499 "is_configured": false, 00:08:26.499 "data_offset": 0, 00:08:26.499 "data_size": 63488 00:08:26.499 }, 00:08:26.499 { 00:08:26.499 "name": null, 00:08:26.499 "uuid": "59ee2796-d125-428e-b8b0-750774e31ee5", 00:08:26.499 "is_configured": false, 00:08:26.499 "data_offset": 0, 00:08:26.499 "data_size": 63488 00:08:26.499 
}, 00:08:26.499 { 00:08:26.499 "name": "BaseBdev3", 00:08:26.499 "uuid": "6297a072-5dc5-4924-9ddb-d3a117332e12", 00:08:26.499 "is_configured": true, 00:08:26.499 "data_offset": 2048, 00:08:26.499 "data_size": 63488 00:08:26.499 } 00:08:26.499 ] 00:08:26.499 }' 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.499 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.759 [2024-11-26 21:15:44.888116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.759 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.018 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.018 "name": "Existed_Raid", 00:08:27.018 "uuid": "1fcdf3cf-c1c5-4d67-866c-35895e088a73", 00:08:27.018 "strip_size_kb": 64, 00:08:27.018 "state": "configuring", 00:08:27.018 "raid_level": "raid0", 00:08:27.018 "superblock": true, 00:08:27.018 "num_base_bdevs": 3, 00:08:27.018 "num_base_bdevs_discovered": 2, 00:08:27.018 
"num_base_bdevs_operational": 3, 00:08:27.018 "base_bdevs_list": [ 00:08:27.018 { 00:08:27.018 "name": null, 00:08:27.018 "uuid": "2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa", 00:08:27.018 "is_configured": false, 00:08:27.018 "data_offset": 0, 00:08:27.018 "data_size": 63488 00:08:27.018 }, 00:08:27.019 { 00:08:27.019 "name": "BaseBdev2", 00:08:27.019 "uuid": "59ee2796-d125-428e-b8b0-750774e31ee5", 00:08:27.019 "is_configured": true, 00:08:27.019 "data_offset": 2048, 00:08:27.019 "data_size": 63488 00:08:27.019 }, 00:08:27.019 { 00:08:27.019 "name": "BaseBdev3", 00:08:27.019 "uuid": "6297a072-5dc5-4924-9ddb-d3a117332e12", 00:08:27.019 "is_configured": true, 00:08:27.019 "data_offset": 2048, 00:08:27.019 "data_size": 63488 00:08:27.019 } 00:08:27.019 ] 00:08:27.019 }' 00:08:27.019 21:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.019 21:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.280 [2024-11-26 21:15:45.368007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:27.280 [2024-11-26 21:15:45.368350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:27.280 [2024-11-26 21:15:45.368407] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:27.280 [2024-11-26 21:15:45.368683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:27.280 [2024-11-26 21:15:45.368869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:27.280 [2024-11-26 21:15:45.368912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:27.280 NewBaseBdev 00:08:27.280 [2024-11-26 21:15:45.369106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:27.280 21:15:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.280 [ 00:08:27.280 { 00:08:27.280 "name": "NewBaseBdev", 00:08:27.280 "aliases": [ 00:08:27.280 "2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa" 00:08:27.280 ], 00:08:27.280 "product_name": "Malloc disk", 00:08:27.280 "block_size": 512, 00:08:27.280 "num_blocks": 65536, 00:08:27.280 "uuid": "2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa", 00:08:27.280 "assigned_rate_limits": { 00:08:27.280 "rw_ios_per_sec": 0, 00:08:27.280 "rw_mbytes_per_sec": 0, 00:08:27.280 "r_mbytes_per_sec": 0, 00:08:27.280 "w_mbytes_per_sec": 0 00:08:27.280 }, 00:08:27.280 "claimed": true, 00:08:27.280 "claim_type": "exclusive_write", 00:08:27.280 "zoned": false, 00:08:27.280 "supported_io_types": { 00:08:27.280 "read": true, 00:08:27.280 "write": true, 00:08:27.280 "unmap": true, 
00:08:27.280 "flush": true, 00:08:27.280 "reset": true, 00:08:27.280 "nvme_admin": false, 00:08:27.280 "nvme_io": false, 00:08:27.280 "nvme_io_md": false, 00:08:27.280 "write_zeroes": true, 00:08:27.280 "zcopy": true, 00:08:27.280 "get_zone_info": false, 00:08:27.280 "zone_management": false, 00:08:27.280 "zone_append": false, 00:08:27.280 "compare": false, 00:08:27.280 "compare_and_write": false, 00:08:27.280 "abort": true, 00:08:27.280 "seek_hole": false, 00:08:27.280 "seek_data": false, 00:08:27.280 "copy": true, 00:08:27.280 "nvme_iov_md": false 00:08:27.280 }, 00:08:27.280 "memory_domains": [ 00:08:27.280 { 00:08:27.280 "dma_device_id": "system", 00:08:27.280 "dma_device_type": 1 00:08:27.280 }, 00:08:27.280 { 00:08:27.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.280 "dma_device_type": 2 00:08:27.280 } 00:08:27.280 ], 00:08:27.280 "driver_specific": {} 00:08:27.280 } 00:08:27.280 ] 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.280 21:15:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.280 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.539 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.539 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.539 "name": "Existed_Raid", 00:08:27.539 "uuid": "1fcdf3cf-c1c5-4d67-866c-35895e088a73", 00:08:27.539 "strip_size_kb": 64, 00:08:27.539 "state": "online", 00:08:27.539 "raid_level": "raid0", 00:08:27.539 "superblock": true, 00:08:27.539 "num_base_bdevs": 3, 00:08:27.539 "num_base_bdevs_discovered": 3, 00:08:27.539 "num_base_bdevs_operational": 3, 00:08:27.539 "base_bdevs_list": [ 00:08:27.539 { 00:08:27.539 "name": "NewBaseBdev", 00:08:27.539 "uuid": "2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa", 00:08:27.539 "is_configured": true, 00:08:27.539 "data_offset": 2048, 00:08:27.539 "data_size": 63488 00:08:27.539 }, 00:08:27.539 { 00:08:27.539 "name": "BaseBdev2", 00:08:27.539 "uuid": "59ee2796-d125-428e-b8b0-750774e31ee5", 00:08:27.539 "is_configured": true, 00:08:27.539 "data_offset": 2048, 00:08:27.539 "data_size": 63488 00:08:27.539 }, 00:08:27.539 { 00:08:27.539 "name": "BaseBdev3", 00:08:27.539 "uuid": "6297a072-5dc5-4924-9ddb-d3a117332e12", 00:08:27.539 "is_configured": 
true, 00:08:27.539 "data_offset": 2048, 00:08:27.539 "data_size": 63488 00:08:27.539 } 00:08:27.539 ] 00:08:27.539 }' 00:08:27.539 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.539 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.798 [2024-11-26 21:15:45.831643] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.798 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.798 "name": "Existed_Raid", 00:08:27.798 "aliases": [ 00:08:27.798 "1fcdf3cf-c1c5-4d67-866c-35895e088a73" 00:08:27.798 ], 00:08:27.798 "product_name": "Raid Volume", 
00:08:27.798 "block_size": 512, 00:08:27.798 "num_blocks": 190464, 00:08:27.798 "uuid": "1fcdf3cf-c1c5-4d67-866c-35895e088a73", 00:08:27.798 "assigned_rate_limits": { 00:08:27.798 "rw_ios_per_sec": 0, 00:08:27.798 "rw_mbytes_per_sec": 0, 00:08:27.798 "r_mbytes_per_sec": 0, 00:08:27.798 "w_mbytes_per_sec": 0 00:08:27.798 }, 00:08:27.798 "claimed": false, 00:08:27.798 "zoned": false, 00:08:27.798 "supported_io_types": { 00:08:27.798 "read": true, 00:08:27.798 "write": true, 00:08:27.798 "unmap": true, 00:08:27.798 "flush": true, 00:08:27.798 "reset": true, 00:08:27.798 "nvme_admin": false, 00:08:27.798 "nvme_io": false, 00:08:27.798 "nvme_io_md": false, 00:08:27.798 "write_zeroes": true, 00:08:27.798 "zcopy": false, 00:08:27.798 "get_zone_info": false, 00:08:27.798 "zone_management": false, 00:08:27.798 "zone_append": false, 00:08:27.798 "compare": false, 00:08:27.798 "compare_and_write": false, 00:08:27.798 "abort": false, 00:08:27.798 "seek_hole": false, 00:08:27.798 "seek_data": false, 00:08:27.798 "copy": false, 00:08:27.798 "nvme_iov_md": false 00:08:27.798 }, 00:08:27.798 "memory_domains": [ 00:08:27.798 { 00:08:27.798 "dma_device_id": "system", 00:08:27.798 "dma_device_type": 1 00:08:27.798 }, 00:08:27.798 { 00:08:27.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.798 "dma_device_type": 2 00:08:27.798 }, 00:08:27.798 { 00:08:27.798 "dma_device_id": "system", 00:08:27.798 "dma_device_type": 1 00:08:27.798 }, 00:08:27.798 { 00:08:27.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.798 "dma_device_type": 2 00:08:27.798 }, 00:08:27.798 { 00:08:27.798 "dma_device_id": "system", 00:08:27.798 "dma_device_type": 1 00:08:27.798 }, 00:08:27.798 { 00:08:27.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.798 "dma_device_type": 2 00:08:27.798 } 00:08:27.798 ], 00:08:27.798 "driver_specific": { 00:08:27.798 "raid": { 00:08:27.798 "uuid": "1fcdf3cf-c1c5-4d67-866c-35895e088a73", 00:08:27.798 "strip_size_kb": 64, 00:08:27.798 "state": "online", 00:08:27.798 
"raid_level": "raid0", 00:08:27.798 "superblock": true, 00:08:27.798 "num_base_bdevs": 3, 00:08:27.798 "num_base_bdevs_discovered": 3, 00:08:27.798 "num_base_bdevs_operational": 3, 00:08:27.798 "base_bdevs_list": [ 00:08:27.798 { 00:08:27.798 "name": "NewBaseBdev", 00:08:27.798 "uuid": "2ba5a403-3beb-4443-9bc3-c9d4bc7c7bfa", 00:08:27.798 "is_configured": true, 00:08:27.798 "data_offset": 2048, 00:08:27.798 "data_size": 63488 00:08:27.798 }, 00:08:27.798 { 00:08:27.798 "name": "BaseBdev2", 00:08:27.798 "uuid": "59ee2796-d125-428e-b8b0-750774e31ee5", 00:08:27.798 "is_configured": true, 00:08:27.798 "data_offset": 2048, 00:08:27.798 "data_size": 63488 00:08:27.798 }, 00:08:27.798 { 00:08:27.798 "name": "BaseBdev3", 00:08:27.798 "uuid": "6297a072-5dc5-4924-9ddb-d3a117332e12", 00:08:27.798 "is_configured": true, 00:08:27.798 "data_offset": 2048, 00:08:27.798 "data_size": 63488 00:08:27.798 } 00:08:27.799 ] 00:08:27.799 } 00:08:27.799 } 00:08:27.799 }' 00:08:27.799 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.799 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:27.799 BaseBdev2 00:08:27.799 BaseBdev3' 00:08:27.799 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.057 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.057 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.057 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:28.057 21:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:28.057 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.057 21:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.057 [2024-11-26 21:15:46.135097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.057 [2024-11-26 21:15:46.135129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.057 [2024-11-26 21:15:46.135217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.057 [2024-11-26 21:15:46.135276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.057 [2024-11-26 21:15:46.135291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64294 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64294 ']' 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64294 00:08:28.057 21:15:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64294 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64294' 00:08:28.057 killing process with pid 64294 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64294 00:08:28.057 [2024-11-26 21:15:46.187319] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.057 21:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64294 00:08:28.622 [2024-11-26 21:15:46.554528] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.001 21:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:30.001 00:08:30.001 real 0m10.676s 00:08:30.001 user 0m16.888s 00:08:30.001 sys 0m1.655s 00:08:30.001 21:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.001 ************************************ 00:08:30.001 END TEST raid_state_function_test_sb 00:08:30.001 ************************************ 00:08:30.001 21:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.001 21:15:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:30.001 21:15:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:30.001 21:15:47 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.001 21:15:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.001 ************************************ 00:08:30.001 START TEST raid_superblock_test 00:08:30.001 ************************************ 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:30.001 21:15:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64920 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64920 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64920 ']' 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.001 21:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.001 [2024-11-26 21:15:47.986759] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:30.001 [2024-11-26 21:15:47.986997] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64920 ] 00:08:30.001 [2024-11-26 21:15:48.151058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.261 [2024-11-26 21:15:48.263426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.520 [2024-11-26 21:15:48.457734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.520 [2024-11-26 21:15:48.457885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:30.805 
21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.805 malloc1 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.805 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.805 [2024-11-26 21:15:48.912166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:30.806 [2024-11-26 21:15:48.912275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.806 [2024-11-26 21:15:48.912318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:30.806 [2024-11-26 21:15:48.912347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.806 [2024-11-26 21:15:48.914469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.806 [2024-11-26 21:15:48.914542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:30.806 pt1 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.806 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 malloc2 00:08:31.087 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.087 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:31.087 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.087 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 [2024-11-26 21:15:48.966007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:31.087 [2024-11-26 21:15:48.966104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.087 [2024-11-26 21:15:48.966145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:31.087 [2024-11-26 21:15:48.966173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.087 [2024-11-26 21:15:48.968206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.088 [2024-11-26 21:15:48.968279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:31.088 
pt2 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.088 21:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.088 malloc3 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.088 [2024-11-26 21:15:49.039268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:31.088 [2024-11-26 21:15:49.039408] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.088 [2024-11-26 21:15:49.039451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:31.088 [2024-11-26 21:15:49.039479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.088 [2024-11-26 21:15:49.041635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.088 [2024-11-26 21:15:49.041678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:31.088 pt3 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.088 [2024-11-26 21:15:49.051307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:31.088 [2024-11-26 21:15:49.053156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:31.088 [2024-11-26 21:15:49.053228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:31.088 [2024-11-26 21:15:49.053403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:31.088 [2024-11-26 21:15:49.053417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:31.088 [2024-11-26 21:15:49.053675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:31.088 [2024-11-26 21:15:49.053835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:31.088 [2024-11-26 21:15:49.053844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:31.088 [2024-11-26 21:15:49.054033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.088 21:15:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.088 "name": "raid_bdev1", 00:08:31.088 "uuid": "9acc34e0-5f86-4891-ac90-027479b78a86", 00:08:31.088 "strip_size_kb": 64, 00:08:31.088 "state": "online", 00:08:31.088 "raid_level": "raid0", 00:08:31.088 "superblock": true, 00:08:31.088 "num_base_bdevs": 3, 00:08:31.088 "num_base_bdevs_discovered": 3, 00:08:31.088 "num_base_bdevs_operational": 3, 00:08:31.088 "base_bdevs_list": [ 00:08:31.088 { 00:08:31.088 "name": "pt1", 00:08:31.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.088 "is_configured": true, 00:08:31.088 "data_offset": 2048, 00:08:31.088 "data_size": 63488 00:08:31.088 }, 00:08:31.088 { 00:08:31.088 "name": "pt2", 00:08:31.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.088 "is_configured": true, 00:08:31.088 "data_offset": 2048, 00:08:31.088 "data_size": 63488 00:08:31.088 }, 00:08:31.088 { 00:08:31.088 "name": "pt3", 00:08:31.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.088 "is_configured": true, 00:08:31.088 "data_offset": 2048, 00:08:31.088 "data_size": 63488 00:08:31.088 } 00:08:31.088 ] 00:08:31.088 }' 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.088 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names
00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:31.347 [2024-11-26 21:15:49.414936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.347 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:31.347 "name": "raid_bdev1",
00:08:31.347 "aliases": [
00:08:31.347 "9acc34e0-5f86-4891-ac90-027479b78a86"
00:08:31.347 ],
00:08:31.347 "product_name": "Raid Volume",
00:08:31.347 "block_size": 512,
00:08:31.347 "num_blocks": 190464,
00:08:31.347 "uuid": "9acc34e0-5f86-4891-ac90-027479b78a86",
00:08:31.347 "assigned_rate_limits": {
00:08:31.347 "rw_ios_per_sec": 0,
00:08:31.347 "rw_mbytes_per_sec": 0,
00:08:31.347 "r_mbytes_per_sec": 0,
00:08:31.347 "w_mbytes_per_sec": 0
00:08:31.347 },
00:08:31.347 "claimed": false,
00:08:31.347 "zoned": false,
00:08:31.347 "supported_io_types": {
00:08:31.347 "read": true,
00:08:31.347 "write": true,
00:08:31.347 "unmap": true,
00:08:31.347 "flush": true,
00:08:31.347 "reset": true,
00:08:31.347 "nvme_admin": false,
00:08:31.347 "nvme_io": false,
00:08:31.347 "nvme_io_md": false,
00:08:31.347 "write_zeroes": true,
00:08:31.347 "zcopy": false,
00:08:31.347 "get_zone_info": false,
00:08:31.347 "zone_management": false,
00:08:31.347 "zone_append": false,
00:08:31.347 "compare": false,
00:08:31.347 "compare_and_write": false,
00:08:31.347 "abort": false,
00:08:31.347 "seek_hole": false,
00:08:31.347 "seek_data": false,
00:08:31.347 "copy": false,
00:08:31.347 "nvme_iov_md": false
00:08:31.347 },
00:08:31.347 "memory_domains": [
00:08:31.347 {
00:08:31.347 "dma_device_id": "system",
00:08:31.347 "dma_device_type": 1
00:08:31.347 },
00:08:31.347 {
00:08:31.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:31.347 "dma_device_type": 2
00:08:31.347 },
00:08:31.347 {
00:08:31.347 "dma_device_id": "system",
00:08:31.347 "dma_device_type": 1
00:08:31.347 },
00:08:31.347 {
00:08:31.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:31.347 "dma_device_type": 2
00:08:31.347 },
00:08:31.347 {
00:08:31.347 "dma_device_id": "system",
00:08:31.347 "dma_device_type": 1
00:08:31.347 },
00:08:31.347 {
00:08:31.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:31.347 "dma_device_type": 2
00:08:31.347 }
00:08:31.347 ],
00:08:31.347 "driver_specific": {
00:08:31.347 "raid": {
00:08:31.347 "uuid": "9acc34e0-5f86-4891-ac90-027479b78a86",
00:08:31.347 "strip_size_kb": 64,
00:08:31.347 "state": "online",
00:08:31.347 "raid_level": "raid0",
00:08:31.347 "superblock": true,
00:08:31.347 "num_base_bdevs": 3,
00:08:31.347 "num_base_bdevs_discovered": 3,
00:08:31.347 "num_base_bdevs_operational": 3,
00:08:31.347 "base_bdevs_list": [
00:08:31.347 {
00:08:31.347 "name": "pt1",
00:08:31.347 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:31.347 "is_configured": true,
00:08:31.347 "data_offset": 2048,
00:08:31.347 "data_size": 63488
00:08:31.347 },
00:08:31.347 {
00:08:31.347 "name": "pt2",
00:08:31.347 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:31.348 "is_configured": true,
00:08:31.348 "data_offset": 2048,
00:08:31.348 "data_size": 63488
00:08:31.348 },
00:08:31.348 {
00:08:31.348 "name": "pt3",
00:08:31.348 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:31.348 "is_configured": true,
00:08:31.348 "data_offset": 2048,
00:08:31.348 "data_size": 63488
00:08:31.348 }
00:08:31.348 ]
00:08:31.348 }
00:08:31.348 }
00:08:31.348 }'
00:08:31.348 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:31.607 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:31.608 pt2
00:08:31.608 pt3'
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:31.608 [2024-11-26 21:15:49.666461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9acc34e0-5f86-4891-ac90-027479b78a86
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9acc34e0-5f86-4891-ac90-027479b78a86 ']'
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.608 [2024-11-26 21:15:49.690123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:31.608 [2024-11-26 21:15:49.690152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:31.608 [2024-11-26 21:15:49.690237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:31.608 [2024-11-26 21:15:49.690296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:31.608 [2024-11-26 21:15:49.690305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.608 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.868 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.868 [2024-11-26 21:15:49.837975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:31.868 [2024-11-26 21:15:49.839817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:31.868 [2024-11-26 21:15:49.839873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:08:31.868 [2024-11-26 21:15:49.839928] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:08:31.868 [2024-11-26 21:15:49.839996] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:08:31.868 [2024-11-26 21:15:49.840015] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:08:31.868 [2024-11-26 21:15:49.840032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:31.868 [2024-11-26 21:15:49.840043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:08:31.868 request:
00:08:31.868 {
00:08:31.868 "name": "raid_bdev1",
00:08:31.868 "raid_level": "raid0",
00:08:31.868 "base_bdevs": [
00:08:31.868 "malloc1",
00:08:31.868 "malloc2",
00:08:31.868 "malloc3"
00:08:31.868 ],
00:08:31.869 "strip_size_kb": 64,
00:08:31.869 "superblock": false,
00:08:31.869 "method": "bdev_raid_create",
00:08:31.869 "req_id": 1
00:08:31.869 }
00:08:31.869 Got JSON-RPC error response
00:08:31.869 response:
00:08:31.869 {
00:08:31.869 "code": -17,
00:08:31.869 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:08:31.869 }
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.869 [2024-11-26 21:15:49.901766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:31.869 [2024-11-26 21:15:49.901889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:31.869 [2024-11-26 21:15:49.901929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:08:31.869 [2024-11-26 21:15:49.901977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:31.869 [2024-11-26 21:15:49.904155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:31.869 [2024-11-26 21:15:49.904227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:31.869 [2024-11-26 21:15:49.904335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:31.869 [2024-11-26 21:15:49.904425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:31.869 pt1
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:31.869 "name": "raid_bdev1",
00:08:31.869 "uuid": "9acc34e0-5f86-4891-ac90-027479b78a86",
00:08:31.869 "strip_size_kb": 64,
00:08:31.869 "state": "configuring",
00:08:31.869 "raid_level": "raid0",
00:08:31.869 "superblock": true,
00:08:31.869 "num_base_bdevs": 3,
00:08:31.869 "num_base_bdevs_discovered": 1,
00:08:31.869 "num_base_bdevs_operational": 3,
00:08:31.869 "base_bdevs_list": [
00:08:31.869 {
00:08:31.869 "name": "pt1",
00:08:31.869 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:31.869 "is_configured": true,
00:08:31.869 "data_offset": 2048,
00:08:31.869 "data_size": 63488
00:08:31.869 },
00:08:31.869 {
00:08:31.869 "name": null,
00:08:31.869 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:31.869 "is_configured": false,
00:08:31.869 "data_offset": 2048,
00:08:31.869 "data_size": 63488
00:08:31.869 },
00:08:31.869 {
00:08:31.869 "name": null,
00:08:31.869 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:31.869 "is_configured": false,
00:08:31.869 "data_offset": 2048,
00:08:31.869 "data_size": 63488
00:08:31.869 }
00:08:31.869 ]
00:08:31.869 }'
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:31.869 21:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.128 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:08:32.128 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:32.128 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.128 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.388 [2024-11-26 21:15:50.285122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:32.388 [2024-11-26 21:15:50.285257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:32.388 [2024-11-26 21:15:50.285290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:08:32.388 [2024-11-26 21:15:50.285300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:32.388 [2024-11-26 21:15:50.285756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:32.388 [2024-11-26 21:15:50.285774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:32.388 [2024-11-26 21:15:50.285867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:32.388 [2024-11-26 21:15:50.285896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:32.388 pt2
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.388 [2024-11-26 21:15:50.297116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:32.388 "name": "raid_bdev1",
00:08:32.388 "uuid": "9acc34e0-5f86-4891-ac90-027479b78a86",
00:08:32.388 "strip_size_kb": 64,
00:08:32.388 "state": "configuring",
00:08:32.388 "raid_level": "raid0",
00:08:32.388 "superblock": true,
00:08:32.388 "num_base_bdevs": 3,
00:08:32.388 "num_base_bdevs_discovered": 1,
00:08:32.388 "num_base_bdevs_operational": 3,
00:08:32.388 "base_bdevs_list": [
00:08:32.388 {
00:08:32.388 "name": "pt1",
00:08:32.388 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:32.388 "is_configured": true,
00:08:32.388 "data_offset": 2048,
00:08:32.388 "data_size": 63488
00:08:32.388 },
00:08:32.388 {
00:08:32.388 "name": null,
00:08:32.388 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:32.388 "is_configured": false,
00:08:32.388 "data_offset": 0,
00:08:32.388 "data_size": 63488
00:08:32.388 },
00:08:32.388 {
00:08:32.388 "name": null,
00:08:32.388 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:32.388 "is_configured": false,
00:08:32.388 "data_offset": 2048,
00:08:32.388 "data_size": 63488
00:08:32.388 }
00:08:32.388 ]
00:08:32.388 }'
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:32.388 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.648 [2024-11-26 21:15:50.696394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:32.648 [2024-11-26 21:15:50.696513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:32.648 [2024-11-26 21:15:50.696551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:08:32.648 [2024-11-26 21:15:50.696581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:32.648 [2024-11-26 21:15:50.697082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:32.648 [2024-11-26 21:15:50.697148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:32.648 [2024-11-26 21:15:50.697261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:32.648 [2024-11-26 21:15:50.697314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:32.648 pt2
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.648 [2024-11-26 21:15:50.708346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:08:32.648 [2024-11-26 21:15:50.708446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:32.648 [2024-11-26 21:15:50.708478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:08:32.648 [2024-11-26 21:15:50.708507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:32.648 [2024-11-26 21:15:50.708899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:32.648 [2024-11-26 21:15:50.708974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:08:32.648 [2024-11-26 21:15:50.709070] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:08:32.648 [2024-11-26 21:15:50.709121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:08:32.648 [2024-11-26 21:15:50.709266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:32.648 [2024-11-26 21:15:50.709305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:32.648 [2024-11-26 21:15:50.709570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:08:32.648 [2024-11-26 21:15:50.709756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:32.648 [2024-11-26 21:15:50.709794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:32.648 [2024-11-26 21:15:50.709984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:32.648 pt3
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:32.648 "name": "raid_bdev1",
00:08:32.648 "uuid": "9acc34e0-5f86-4891-ac90-027479b78a86",
00:08:32.648 "strip_size_kb": 64,
00:08:32.648 "state": "online",
00:08:32.648 "raid_level": "raid0",
00:08:32.648 "superblock": true,
00:08:32.648 "num_base_bdevs": 3,
00:08:32.648 "num_base_bdevs_discovered": 3,
00:08:32.648 "num_base_bdevs_operational": 3,
00:08:32.648 "base_bdevs_list": [
00:08:32.648 {
00:08:32.648 "name": "pt1",
00:08:32.648 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:32.648 "is_configured": true,
00:08:32.648 "data_offset": 2048,
00:08:32.648 "data_size": 63488
00:08:32.648 },
00:08:32.648 {
00:08:32.648 "name": "pt2",
00:08:32.648 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:32.648 "is_configured": true,
00:08:32.648 "data_offset": 2048,
00:08:32.648 "data_size": 63488
00:08:32.648 },
00:08:32.648 {
00:08:32.648 "name": "pt3",
00:08:32.648 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:32.648 "is_configured": true,
00:08:32.648 "data_offset": 2048,
00:08:32.648 "data_size": 63488
00:08:32.648 }
00:08:32.648 ]
00:08:32.648 }'
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:32.648 21:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:33.217 [2024-11-26 21:15:51.144021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.217 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:33.217 "name": "raid_bdev1",
00:08:33.217 "aliases": [
00:08:33.217 "9acc34e0-5f86-4891-ac90-027479b78a86"
00:08:33.217 ],
00:08:33.217 "product_name": "Raid Volume",
00:08:33.217 "block_size": 512,
00:08:33.217 "num_blocks": 190464,
00:08:33.217 "uuid": "9acc34e0-5f86-4891-ac90-027479b78a86",
00:08:33.217 "assigned_rate_limits": {
00:08:33.217 "rw_ios_per_sec": 0,
00:08:33.217 "rw_mbytes_per_sec": 0,
00:08:33.217 "r_mbytes_per_sec": 0,
00:08:33.217 "w_mbytes_per_sec": 0
00:08:33.217 },
00:08:33.217 "claimed": false,
00:08:33.217 "zoned": false,
00:08:33.217 "supported_io_types": {
00:08:33.217 "read": true,
00:08:33.217 "write": true,
00:08:33.217 "unmap": true,
00:08:33.217 "flush": true,
00:08:33.217 "reset": true,
00:08:33.217 "nvme_admin": false,
00:08:33.217 "nvme_io": false,
00:08:33.217 "nvme_io_md": false,
00:08:33.217 "write_zeroes": true,
00:08:33.217 "zcopy": false,
00:08:33.217 "get_zone_info": false,
00:08:33.217 "zone_management": false,
00:08:33.217 "zone_append": false,
00:08:33.217 "compare": false,
00:08:33.217 "compare_and_write": false,
00:08:33.217 "abort": false,
00:08:33.217 "seek_hole": false,
00:08:33.217 "seek_data": false,
00:08:33.217 "copy": false,
00:08:33.217 "nvme_iov_md": false
00:08:33.217 },
00:08:33.217 "memory_domains": [
00:08:33.217 {
00:08:33.217 "dma_device_id": "system",
00:08:33.217 "dma_device_type": 1
00:08:33.217 },
00:08:33.217 {
00:08:33.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.217 "dma_device_type": 2
00:08:33.217 },
00:08:33.217 {
00:08:33.217 "dma_device_id": "system",
00:08:33.217 "dma_device_type": 1
00:08:33.217 },
00:08:33.217 {
00:08:33.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.217 "dma_device_type": 2
00:08:33.217 },
00:08:33.217 {
00:08:33.217 "dma_device_id": "system",
00:08:33.217 "dma_device_type": 1
00:08:33.217 },
00:08:33.217 {
00:08:33.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.217 "dma_device_type": 2
00:08:33.217 }
00:08:33.217 ],
00:08:33.217 "driver_specific": {
00:08:33.217 "raid": {
00:08:33.217 "uuid": "9acc34e0-5f86-4891-ac90-027479b78a86",
00:08:33.217 "strip_size_kb": 64,
00:08:33.217 "state": "online",
00:08:33.217 "raid_level": "raid0",
00:08:33.217 "superblock": true,
00:08:33.217 "num_base_bdevs": 3,
00:08:33.217 "num_base_bdevs_discovered": 3,
00:08:33.217 "num_base_bdevs_operational": 3,
00:08:33.217 "base_bdevs_list": [
00:08:33.217 {
00:08:33.217 "name": "pt1",
00:08:33.217 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:33.217 "is_configured": true,
00:08:33.217 "data_offset": 2048,
00:08:33.217 "data_size": 63488
00:08:33.217 },
00:08:33.217 {
00:08:33.217 "name": "pt2",
00:08:33.217 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:33.217 "is_configured": true,
00:08:33.217 "data_offset": 2048,
00:08:33.218 "data_size": 63488
00:08:33.218 },
{ 00:08:33.218 "name": "pt3", 00:08:33.218 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.218 "is_configured": true, 00:08:33.218 "data_offset": 2048, 00:08:33.218 "data_size": 63488 00:08:33.218 } 00:08:33.218 ] 00:08:33.218 } 00:08:33.218 } 00:08:33.218 }' 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:33.218 pt2 00:08:33.218 pt3' 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:33.218 21:15:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.218 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.477 
[2024-11-26 21:15:51.375559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9acc34e0-5f86-4891-ac90-027479b78a86 '!=' 9acc34e0-5f86-4891-ac90-027479b78a86 ']' 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64920 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64920 ']' 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64920 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64920 00:08:33.477 killing process with pid 64920 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64920' 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64920 00:08:33.477 [2024-11-26 21:15:51.454010] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.477 [2024-11-26 21:15:51.454106] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.477 [2024-11-26 21:15:51.454164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.477 [2024-11-26 21:15:51.454174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:33.477 21:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64920 00:08:33.736 [2024-11-26 21:15:51.747613] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.116 21:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:35.116 00:08:35.116 real 0m4.912s 00:08:35.116 user 0m6.953s 00:08:35.116 sys 0m0.779s 00:08:35.116 21:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.116 21:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.116 ************************************ 00:08:35.116 END TEST raid_superblock_test 00:08:35.116 ************************************ 00:08:35.116 21:15:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:35.116 21:15:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:35.116 21:15:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.116 21:15:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.116 ************************************ 00:08:35.116 START TEST raid_read_error_test 00:08:35.116 ************************************ 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:35.116 21:15:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LfN4FW1tBJ 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65172 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65172 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65172 ']' 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.116 21:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.116 [2024-11-26 21:15:52.980523] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:35.116 [2024-11-26 21:15:52.980725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65172 ] 00:08:35.116 [2024-11-26 21:15:53.152644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.116 [2024-11-26 21:15:53.265153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.375 [2024-11-26 21:15:53.445628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.376 [2024-11-26 21:15:53.445775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.944 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.944 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:35.944 21:15:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:35.944 21:15:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:35.944 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.944 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.944 BaseBdev1_malloc 00:08:35.944 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.944 21:15:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:35.944 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.944 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.944 true 00:08:35.944 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.945 [2024-11-26 21:15:53.882104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:35.945 [2024-11-26 21:15:53.882216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.945 [2024-11-26 21:15:53.882240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:35.945 [2024-11-26 21:15:53.882253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.945 [2024-11-26 21:15:53.884354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.945 [2024-11-26 21:15:53.884396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:35.945 BaseBdev1 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.945 BaseBdev2_malloc 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.945 true 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.945 [2024-11-26 21:15:53.951921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:35.945 [2024-11-26 21:15:53.951985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.945 [2024-11-26 21:15:53.952003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:35.945 [2024-11-26 21:15:53.952013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.945 [2024-11-26 21:15:53.954119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.945 [2024-11-26 21:15:53.954157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:35.945 BaseBdev2 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.945 21:15:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.945 BaseBdev3_malloc 00:08:35.945 21:15:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.945 true 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.945 [2024-11-26 21:15:54.031071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:35.945 [2024-11-26 21:15:54.031125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.945 [2024-11-26 21:15:54.031142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:35.945 [2024-11-26 21:15:54.031153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.945 [2024-11-26 21:15:54.033184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.945 [2024-11-26 21:15:54.033278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:35.945 BaseBdev3 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.945 [2024-11-26 21:15:54.043128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.945 [2024-11-26 21:15:54.044869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.945 [2024-11-26 21:15:54.045015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.945 [2024-11-26 21:15:54.045217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:35.945 [2024-11-26 21:15:54.045232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:35.945 [2024-11-26 21:15:54.045476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:35.945 [2024-11-26 21:15:54.045641] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:35.945 [2024-11-26 21:15:54.045654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:35.945 [2024-11-26 21:15:54.045795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.945 21:15:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.945 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.945 "name": "raid_bdev1", 00:08:35.945 "uuid": "1452a3b7-22be-47e6-984c-5f7c8a55560b", 00:08:35.945 "strip_size_kb": 64, 00:08:35.945 "state": "online", 00:08:35.945 "raid_level": "raid0", 00:08:35.945 "superblock": true, 00:08:35.945 "num_base_bdevs": 3, 00:08:35.945 "num_base_bdevs_discovered": 3, 00:08:35.945 "num_base_bdevs_operational": 3, 00:08:35.945 "base_bdevs_list": [ 00:08:35.945 { 00:08:35.945 "name": "BaseBdev1", 00:08:35.945 "uuid": "8c9f70d9-a344-5f1d-a9cc-fdb7a1eac177", 00:08:35.945 "is_configured": true, 00:08:35.946 "data_offset": 2048, 00:08:35.946 "data_size": 63488 00:08:35.946 }, 00:08:35.946 { 00:08:35.946 "name": "BaseBdev2", 00:08:35.946 "uuid": "cc910ecb-b7a9-5518-99cc-4575e23e85c4", 00:08:35.946 "is_configured": true, 00:08:35.946 "data_offset": 2048, 00:08:35.946 "data_size": 63488 
00:08:35.946 }, 00:08:35.946 { 00:08:35.946 "name": "BaseBdev3", 00:08:35.946 "uuid": "629aa51a-a323-5a69-a623-54ba638a0409", 00:08:35.946 "is_configured": true, 00:08:35.946 "data_offset": 2048, 00:08:35.946 "data_size": 63488 00:08:35.946 } 00:08:35.946 ] 00:08:35.946 }' 00:08:36.205 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.205 21:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.465 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:36.465 21:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:36.465 [2024-11-26 21:15:54.551656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.405 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.405 "name": "raid_bdev1", 00:08:37.405 "uuid": "1452a3b7-22be-47e6-984c-5f7c8a55560b", 00:08:37.405 "strip_size_kb": 64, 00:08:37.405 "state": "online", 00:08:37.405 "raid_level": "raid0", 00:08:37.405 "superblock": true, 00:08:37.405 "num_base_bdevs": 3, 00:08:37.405 "num_base_bdevs_discovered": 3, 00:08:37.405 "num_base_bdevs_operational": 3, 00:08:37.405 "base_bdevs_list": [ 00:08:37.405 { 00:08:37.405 "name": "BaseBdev1", 00:08:37.405 "uuid": "8c9f70d9-a344-5f1d-a9cc-fdb7a1eac177", 00:08:37.405 "is_configured": true, 00:08:37.405 "data_offset": 2048, 00:08:37.405 "data_size": 63488 
00:08:37.405 }, 00:08:37.405 { 00:08:37.405 "name": "BaseBdev2", 00:08:37.405 "uuid": "cc910ecb-b7a9-5518-99cc-4575e23e85c4", 00:08:37.405 "is_configured": true, 00:08:37.406 "data_offset": 2048, 00:08:37.406 "data_size": 63488 00:08:37.406 }, 00:08:37.406 { 00:08:37.406 "name": "BaseBdev3", 00:08:37.406 "uuid": "629aa51a-a323-5a69-a623-54ba638a0409", 00:08:37.406 "is_configured": true, 00:08:37.406 "data_offset": 2048, 00:08:37.406 "data_size": 63488 00:08:37.406 } 00:08:37.406 ] 00:08:37.406 }' 00:08:37.406 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.406 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.976 [2024-11-26 21:15:55.923091] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:37.976 [2024-11-26 21:15:55.923194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.976 [2024-11-26 21:15:55.926052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.976 [2024-11-26 21:15:55.926135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.976 [2024-11-26 21:15:55.926190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.976 [2024-11-26 21:15:55.926229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:37.976 { 00:08:37.976 "results": [ 00:08:37.976 { 00:08:37.976 "job": "raid_bdev1", 00:08:37.976 "core_mask": "0x1", 00:08:37.976 "workload": "randrw", 00:08:37.976 "percentage": 50, 
00:08:37.976 "status": "finished", 00:08:37.976 "queue_depth": 1, 00:08:37.976 "io_size": 131072, 00:08:37.976 "runtime": 1.372408, 00:08:37.976 "iops": 16216.751869706384, 00:08:37.976 "mibps": 2027.093983713298, 00:08:37.976 "io_failed": 1, 00:08:37.976 "io_timeout": 0, 00:08:37.976 "avg_latency_us": 85.35309015190354, 00:08:37.976 "min_latency_us": 25.2646288209607, 00:08:37.976 "max_latency_us": 1366.5257641921398 00:08:37.976 } 00:08:37.976 ], 00:08:37.976 "core_count": 1 00:08:37.976 } 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65172 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65172 ']' 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65172 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65172 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.976 killing process with pid 65172 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65172' 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65172 00:08:37.976 [2024-11-26 21:15:55.957288] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.976 21:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65172 00:08:38.236 [2024-11-26 
21:15:56.181560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.176 21:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LfN4FW1tBJ 00:08:39.176 21:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:39.176 21:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:39.176 21:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:39.176 21:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:39.176 21:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.176 21:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.176 21:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:39.176 ************************************ 00:08:39.176 END TEST raid_read_error_test 00:08:39.176 ************************************ 00:08:39.176 00:08:39.176 real 0m4.435s 00:08:39.176 user 0m5.278s 00:08:39.176 sys 0m0.519s 00:08:39.176 21:15:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.176 21:15:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.435 21:15:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:39.435 21:15:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:39.435 21:15:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.435 21:15:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.435 ************************************ 00:08:39.435 START TEST raid_write_error_test 00:08:39.435 ************************************ 00:08:39.435 21:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:39.435 21:15:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:39.435 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:39.436 21:15:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YzUL0eUkWr 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65313 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65313 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65313 ']' 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.436 21:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.436 [2024-11-26 21:15:57.491381] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:39.436 [2024-11-26 21:15:57.491598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65313 ] 00:08:39.695 [2024-11-26 21:15:57.651227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.695 [2024-11-26 21:15:57.760274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.955 [2024-11-26 21:15:57.960060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.955 [2024-11-26 21:15:57.960173] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.215 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.215 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:40.215 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.215 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:40.215 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.215 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.215 BaseBdev1_malloc 00:08:40.215 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.215 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:40.215 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.215 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.476 true 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.476 [2024-11-26 21:15:58.375080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:40.476 [2024-11-26 21:15:58.375135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.476 [2024-11-26 21:15:58.375154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:40.476 [2024-11-26 21:15:58.375164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.476 [2024-11-26 21:15:58.377176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.476 [2024-11-26 21:15:58.377216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:40.476 BaseBdev1 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.476 BaseBdev2_malloc 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.476 true 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.476 [2024-11-26 21:15:58.438358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:40.476 [2024-11-26 21:15:58.438477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.476 [2024-11-26 21:15:58.438499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:40.476 [2024-11-26 21:15:58.438509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.476 [2024-11-26 21:15:58.440515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.476 [2024-11-26 21:15:58.440554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:40.476 BaseBdev2 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.476 21:15:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.476 BaseBdev3_malloc 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.476 true 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.476 [2024-11-26 21:15:58.521600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:40.476 [2024-11-26 21:15:58.521650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.476 [2024-11-26 21:15:58.521666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:40.476 [2024-11-26 21:15:58.521676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.476 [2024-11-26 21:15:58.523685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.476 [2024-11-26 21:15:58.523801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:40.476 BaseBdev3 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.476 [2024-11-26 21:15:58.533654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.476 [2024-11-26 21:15:58.535358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.476 [2024-11-26 21:15:58.535427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.476 [2024-11-26 21:15:58.535624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:40.476 [2024-11-26 21:15:58.535638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:40.476 [2024-11-26 21:15:58.535889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:40.476 [2024-11-26 21:15:58.536063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:40.476 [2024-11-26 21:15:58.536078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:40.476 [2024-11-26 21:15:58.536223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.476 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.476 "name": "raid_bdev1", 00:08:40.476 "uuid": "0bf85177-a5e2-4c5f-b22e-1a2154eb80c9", 00:08:40.476 "strip_size_kb": 64, 00:08:40.476 "state": "online", 00:08:40.477 "raid_level": "raid0", 00:08:40.477 "superblock": true, 00:08:40.477 "num_base_bdevs": 3, 00:08:40.477 "num_base_bdevs_discovered": 3, 00:08:40.477 "num_base_bdevs_operational": 3, 00:08:40.477 "base_bdevs_list": [ 00:08:40.477 { 00:08:40.477 "name": "BaseBdev1", 
00:08:40.477 "uuid": "c063e765-968c-5b7c-b1f4-ad946d1697ca", 00:08:40.477 "is_configured": true, 00:08:40.477 "data_offset": 2048, 00:08:40.477 "data_size": 63488 00:08:40.477 }, 00:08:40.477 { 00:08:40.477 "name": "BaseBdev2", 00:08:40.477 "uuid": "53b4988c-e42d-5d0e-96b8-4bb1d7c4201f", 00:08:40.477 "is_configured": true, 00:08:40.477 "data_offset": 2048, 00:08:40.477 "data_size": 63488 00:08:40.477 }, 00:08:40.477 { 00:08:40.477 "name": "BaseBdev3", 00:08:40.477 "uuid": "e7ab4715-5db0-5546-82e1-623dcc510b29", 00:08:40.477 "is_configured": true, 00:08:40.477 "data_offset": 2048, 00:08:40.477 "data_size": 63488 00:08:40.477 } 00:08:40.477 ] 00:08:40.477 }' 00:08:40.477 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.477 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.045 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:41.045 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:41.045 [2024-11-26 21:15:59.042038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.982 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.982 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.982 21:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.982 "name": "raid_bdev1", 00:08:41.982 "uuid": "0bf85177-a5e2-4c5f-b22e-1a2154eb80c9", 00:08:41.982 "strip_size_kb": 64, 00:08:41.982 "state": "online", 00:08:41.982 
"raid_level": "raid0", 00:08:41.982 "superblock": true, 00:08:41.982 "num_base_bdevs": 3, 00:08:41.982 "num_base_bdevs_discovered": 3, 00:08:41.982 "num_base_bdevs_operational": 3, 00:08:41.982 "base_bdevs_list": [ 00:08:41.982 { 00:08:41.982 "name": "BaseBdev1", 00:08:41.982 "uuid": "c063e765-968c-5b7c-b1f4-ad946d1697ca", 00:08:41.982 "is_configured": true, 00:08:41.982 "data_offset": 2048, 00:08:41.982 "data_size": 63488 00:08:41.982 }, 00:08:41.982 { 00:08:41.982 "name": "BaseBdev2", 00:08:41.982 "uuid": "53b4988c-e42d-5d0e-96b8-4bb1d7c4201f", 00:08:41.982 "is_configured": true, 00:08:41.982 "data_offset": 2048, 00:08:41.982 "data_size": 63488 00:08:41.982 }, 00:08:41.982 { 00:08:41.982 "name": "BaseBdev3", 00:08:41.982 "uuid": "e7ab4715-5db0-5546-82e1-623dcc510b29", 00:08:41.982 "is_configured": true, 00:08:41.982 "data_offset": 2048, 00:08:41.982 "data_size": 63488 00:08:41.982 } 00:08:41.982 ] 00:08:41.982 }' 00:08:41.982 21:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.982 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.551 21:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.551 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.551 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.551 [2024-11-26 21:16:00.429831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.551 [2024-11-26 21:16:00.429926] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.551 [2024-11-26 21:16:00.432750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.551 [2024-11-26 21:16:00.432795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.551 [2024-11-26 21:16:00.432832] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.551 [2024-11-26 21:16:00.432842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:42.551 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.551 21:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65313 00:08:42.551 { 00:08:42.551 "results": [ 00:08:42.551 { 00:08:42.551 "job": "raid_bdev1", 00:08:42.551 "core_mask": "0x1", 00:08:42.551 "workload": "randrw", 00:08:42.551 "percentage": 50, 00:08:42.551 "status": "finished", 00:08:42.551 "queue_depth": 1, 00:08:42.551 "io_size": 131072, 00:08:42.551 "runtime": 1.388848, 00:08:42.551 "iops": 16213.437323594806, 00:08:42.551 "mibps": 2026.6796654493508, 00:08:42.551 "io_failed": 1, 00:08:42.551 "io_timeout": 0, 00:08:42.551 "avg_latency_us": 85.34316589717251, 00:08:42.551 "min_latency_us": 24.817467248908297, 00:08:42.551 "max_latency_us": 1445.2262008733624 00:08:42.551 } 00:08:42.551 ], 00:08:42.551 "core_count": 1 00:08:42.551 } 00:08:42.552 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65313 ']' 00:08:42.552 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65313 00:08:42.552 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:42.552 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.552 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65313 00:08:42.552 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.552 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.552 21:16:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65313' 00:08:42.552 killing process with pid 65313 00:08:42.552 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65313 00:08:42.552 [2024-11-26 21:16:00.478372] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.552 21:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65313 00:08:42.552 [2024-11-26 21:16:00.703777] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.931 21:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YzUL0eUkWr 00:08:43.931 21:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:43.931 21:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:43.931 21:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:43.931 21:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:43.931 21:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.931 21:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:43.931 21:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:43.931 00:08:43.931 real 0m4.455s 00:08:43.931 user 0m5.282s 00:08:43.931 sys 0m0.531s 00:08:43.931 ************************************ 00:08:43.931 END TEST raid_write_error_test 00:08:43.931 ************************************ 00:08:43.931 21:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.931 21:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.931 21:16:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:43.931 21:16:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:43.931 21:16:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:43.931 21:16:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.931 21:16:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.931 ************************************ 00:08:43.931 START TEST raid_state_function_test 00:08:43.931 ************************************ 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:43.931 21:16:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:43.931 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:43.932 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:43.932 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65451 00:08:43.932 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:43.932 Process raid pid: 65451 00:08:43.932 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65451' 00:08:43.932 21:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65451 00:08:43.932 21:16:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65451 ']' 00:08:43.932 21:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.932 21:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.932 21:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.932 21:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.932 21:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.932 [2024-11-26 21:16:01.999271] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:43.932 [2024-11-26 21:16:01.999471] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.193 [2024-11-26 21:16:02.173023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.193 [2024-11-26 21:16:02.284067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.455 [2024-11-26 21:16:02.479340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.455 [2024-11-26 21:16:02.479472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.714 21:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.714 21:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.714 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.714 21:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.714 21:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.974 [2024-11-26 21:16:02.873343] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.974 [2024-11-26 21:16:02.873395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.974 [2024-11-26 21:16:02.873406] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.974 [2024-11-26 21:16:02.873415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.974 [2024-11-26 21:16:02.873421] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:44.974 [2024-11-26 21:16:02.873429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.974 "name": "Existed_Raid", 00:08:44.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.974 "strip_size_kb": 64, 00:08:44.974 "state": "configuring", 00:08:44.974 "raid_level": "concat", 00:08:44.974 "superblock": false, 00:08:44.974 "num_base_bdevs": 3, 00:08:44.974 "num_base_bdevs_discovered": 0, 00:08:44.974 "num_base_bdevs_operational": 3, 00:08:44.974 "base_bdevs_list": [ 00:08:44.974 { 00:08:44.974 "name": "BaseBdev1", 00:08:44.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.974 "is_configured": false, 00:08:44.974 "data_offset": 0, 00:08:44.974 "data_size": 0 00:08:44.974 }, 00:08:44.974 { 00:08:44.974 "name": "BaseBdev2", 00:08:44.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.974 "is_configured": false, 00:08:44.974 "data_offset": 0, 00:08:44.974 "data_size": 0 00:08:44.974 }, 00:08:44.974 { 00:08:44.974 "name": "BaseBdev3", 00:08:44.974 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:44.974 "is_configured": false, 00:08:44.974 "data_offset": 0, 00:08:44.974 "data_size": 0 00:08:44.974 } 00:08:44.974 ] 00:08:44.974 }' 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.974 21:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.233 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.233 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.233 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.233 [2024-11-26 21:16:03.312520] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.233 [2024-11-26 21:16:03.312602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:45.233 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.233 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.233 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.233 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.233 [2024-11-26 21:16:03.324509] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.233 [2024-11-26 21:16:03.324588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.233 [2024-11-26 21:16:03.324615] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.233 [2024-11-26 21:16:03.324637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:45.233 [2024-11-26 21:16:03.324654] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:45.233 [2024-11-26 21:16:03.324674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.234 [2024-11-26 21:16:03.372060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.234 BaseBdev1 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.234 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.494 [ 00:08:45.494 { 00:08:45.494 "name": "BaseBdev1", 00:08:45.494 "aliases": [ 00:08:45.494 "c6ad8edb-036a-4fe3-9b1e-bf2f15398c0b" 00:08:45.494 ], 00:08:45.494 "product_name": "Malloc disk", 00:08:45.494 "block_size": 512, 00:08:45.494 "num_blocks": 65536, 00:08:45.494 "uuid": "c6ad8edb-036a-4fe3-9b1e-bf2f15398c0b", 00:08:45.494 "assigned_rate_limits": { 00:08:45.494 "rw_ios_per_sec": 0, 00:08:45.494 "rw_mbytes_per_sec": 0, 00:08:45.494 "r_mbytes_per_sec": 0, 00:08:45.494 "w_mbytes_per_sec": 0 00:08:45.494 }, 00:08:45.494 "claimed": true, 00:08:45.494 "claim_type": "exclusive_write", 00:08:45.494 "zoned": false, 00:08:45.494 "supported_io_types": { 00:08:45.494 "read": true, 00:08:45.494 "write": true, 00:08:45.494 "unmap": true, 00:08:45.494 "flush": true, 00:08:45.494 "reset": true, 00:08:45.494 "nvme_admin": false, 00:08:45.494 "nvme_io": false, 00:08:45.494 "nvme_io_md": false, 00:08:45.494 "write_zeroes": true, 00:08:45.494 "zcopy": true, 00:08:45.494 "get_zone_info": false, 00:08:45.494 "zone_management": false, 00:08:45.494 "zone_append": false, 00:08:45.494 "compare": false, 00:08:45.494 "compare_and_write": false, 00:08:45.494 "abort": true, 00:08:45.494 "seek_hole": false, 00:08:45.494 "seek_data": false, 00:08:45.494 "copy": true, 00:08:45.494 "nvme_iov_md": false 00:08:45.494 }, 00:08:45.494 "memory_domains": [ 00:08:45.494 { 00:08:45.494 "dma_device_id": "system", 00:08:45.494 "dma_device_type": 1 00:08:45.494 }, 00:08:45.494 { 00:08:45.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:45.494 "dma_device_type": 2 00:08:45.494 } 00:08:45.494 ], 00:08:45.494 "driver_specific": {} 00:08:45.494 } 00:08:45.494 ] 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.494 21:16:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.494 "name": "Existed_Raid", 00:08:45.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.494 "strip_size_kb": 64, 00:08:45.494 "state": "configuring", 00:08:45.494 "raid_level": "concat", 00:08:45.494 "superblock": false, 00:08:45.494 "num_base_bdevs": 3, 00:08:45.494 "num_base_bdevs_discovered": 1, 00:08:45.494 "num_base_bdevs_operational": 3, 00:08:45.494 "base_bdevs_list": [ 00:08:45.494 { 00:08:45.494 "name": "BaseBdev1", 00:08:45.494 "uuid": "c6ad8edb-036a-4fe3-9b1e-bf2f15398c0b", 00:08:45.494 "is_configured": true, 00:08:45.494 "data_offset": 0, 00:08:45.494 "data_size": 65536 00:08:45.494 }, 00:08:45.494 { 00:08:45.494 "name": "BaseBdev2", 00:08:45.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.494 "is_configured": false, 00:08:45.494 "data_offset": 0, 00:08:45.494 "data_size": 0 00:08:45.494 }, 00:08:45.494 { 00:08:45.494 "name": "BaseBdev3", 00:08:45.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.494 "is_configured": false, 00:08:45.494 "data_offset": 0, 00:08:45.494 "data_size": 0 00:08:45.494 } 00:08:45.494 ] 00:08:45.494 }' 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.494 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.754 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.754 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.755 [2024-11-26 21:16:03.811344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.755 [2024-11-26 21:16:03.811444] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.755 [2024-11-26 21:16:03.823353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.755 [2024-11-26 21:16:03.825140] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.755 [2024-11-26 21:16:03.825181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.755 [2024-11-26 21:16:03.825203] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:45.755 [2024-11-26 21:16:03.825212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.755 21:16:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.755 "name": "Existed_Raid", 00:08:45.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.755 "strip_size_kb": 64, 00:08:45.755 "state": "configuring", 00:08:45.755 "raid_level": "concat", 00:08:45.755 "superblock": false, 00:08:45.755 "num_base_bdevs": 3, 00:08:45.755 "num_base_bdevs_discovered": 1, 00:08:45.755 "num_base_bdevs_operational": 3, 00:08:45.755 "base_bdevs_list": [ 00:08:45.755 { 00:08:45.755 "name": "BaseBdev1", 00:08:45.755 "uuid": "c6ad8edb-036a-4fe3-9b1e-bf2f15398c0b", 00:08:45.755 "is_configured": true, 00:08:45.755 "data_offset": 
0, 00:08:45.755 "data_size": 65536 00:08:45.755 }, 00:08:45.755 { 00:08:45.755 "name": "BaseBdev2", 00:08:45.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.755 "is_configured": false, 00:08:45.755 "data_offset": 0, 00:08:45.755 "data_size": 0 00:08:45.755 }, 00:08:45.755 { 00:08:45.755 "name": "BaseBdev3", 00:08:45.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.755 "is_configured": false, 00:08:45.755 "data_offset": 0, 00:08:45.755 "data_size": 0 00:08:45.755 } 00:08:45.755 ] 00:08:45.755 }' 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.755 21:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.325 [2024-11-26 21:16:04.309604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.325 BaseBdev2 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.325 [ 00:08:46.325 { 00:08:46.325 "name": "BaseBdev2", 00:08:46.325 "aliases": [ 00:08:46.325 "65314428-34a7-4b5e-a37e-8258b7517483" 00:08:46.325 ], 00:08:46.325 "product_name": "Malloc disk", 00:08:46.325 "block_size": 512, 00:08:46.325 "num_blocks": 65536, 00:08:46.325 "uuid": "65314428-34a7-4b5e-a37e-8258b7517483", 00:08:46.325 "assigned_rate_limits": { 00:08:46.325 "rw_ios_per_sec": 0, 00:08:46.325 "rw_mbytes_per_sec": 0, 00:08:46.325 "r_mbytes_per_sec": 0, 00:08:46.325 "w_mbytes_per_sec": 0 00:08:46.325 }, 00:08:46.325 "claimed": true, 00:08:46.325 "claim_type": "exclusive_write", 00:08:46.325 "zoned": false, 00:08:46.325 "supported_io_types": { 00:08:46.325 "read": true, 00:08:46.325 "write": true, 00:08:46.325 "unmap": true, 00:08:46.325 "flush": true, 00:08:46.325 "reset": true, 00:08:46.325 "nvme_admin": false, 00:08:46.325 "nvme_io": false, 00:08:46.325 "nvme_io_md": false, 00:08:46.325 "write_zeroes": true, 00:08:46.325 "zcopy": true, 00:08:46.325 "get_zone_info": false, 00:08:46.325 "zone_management": false, 00:08:46.325 "zone_append": false, 00:08:46.325 "compare": false, 00:08:46.325 "compare_and_write": false, 00:08:46.325 "abort": true, 00:08:46.325 "seek_hole": 
false, 00:08:46.325 "seek_data": false, 00:08:46.325 "copy": true, 00:08:46.325 "nvme_iov_md": false 00:08:46.325 }, 00:08:46.325 "memory_domains": [ 00:08:46.325 { 00:08:46.325 "dma_device_id": "system", 00:08:46.325 "dma_device_type": 1 00:08:46.325 }, 00:08:46.325 { 00:08:46.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.325 "dma_device_type": 2 00:08:46.325 } 00:08:46.325 ], 00:08:46.325 "driver_specific": {} 00:08:46.325 } 00:08:46.325 ] 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.325 "name": "Existed_Raid", 00:08:46.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.325 "strip_size_kb": 64, 00:08:46.325 "state": "configuring", 00:08:46.325 "raid_level": "concat", 00:08:46.325 "superblock": false, 00:08:46.325 "num_base_bdevs": 3, 00:08:46.325 "num_base_bdevs_discovered": 2, 00:08:46.325 "num_base_bdevs_operational": 3, 00:08:46.325 "base_bdevs_list": [ 00:08:46.325 { 00:08:46.325 "name": "BaseBdev1", 00:08:46.325 "uuid": "c6ad8edb-036a-4fe3-9b1e-bf2f15398c0b", 00:08:46.325 "is_configured": true, 00:08:46.325 "data_offset": 0, 00:08:46.325 "data_size": 65536 00:08:46.325 }, 00:08:46.325 { 00:08:46.325 "name": "BaseBdev2", 00:08:46.325 "uuid": "65314428-34a7-4b5e-a37e-8258b7517483", 00:08:46.325 "is_configured": true, 00:08:46.325 "data_offset": 0, 00:08:46.325 "data_size": 65536 00:08:46.325 }, 00:08:46.325 { 00:08:46.325 "name": "BaseBdev3", 00:08:46.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.325 "is_configured": false, 00:08:46.325 "data_offset": 0, 00:08:46.325 "data_size": 0 00:08:46.325 } 00:08:46.325 ] 00:08:46.325 }' 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.325 21:16:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.895 [2024-11-26 21:16:04.814613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:46.895 [2024-11-26 21:16:04.814724] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:46.895 [2024-11-26 21:16:04.814755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:46.895 [2024-11-26 21:16:04.815075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:46.895 [2024-11-26 21:16:04.815302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:46.895 [2024-11-26 21:16:04.815348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:46.895 [2024-11-26 21:16:04.815652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.895 BaseBdev3 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.895 21:16:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.895 [ 00:08:46.895 { 00:08:46.895 "name": "BaseBdev3", 00:08:46.895 "aliases": [ 00:08:46.895 "68df8d4e-2fac-4fde-a533-c32fb329d466" 00:08:46.895 ], 00:08:46.895 "product_name": "Malloc disk", 00:08:46.895 "block_size": 512, 00:08:46.895 "num_blocks": 65536, 00:08:46.895 "uuid": "68df8d4e-2fac-4fde-a533-c32fb329d466", 00:08:46.895 "assigned_rate_limits": { 00:08:46.895 "rw_ios_per_sec": 0, 00:08:46.895 "rw_mbytes_per_sec": 0, 00:08:46.895 "r_mbytes_per_sec": 0, 00:08:46.895 "w_mbytes_per_sec": 0 00:08:46.895 }, 00:08:46.895 "claimed": true, 00:08:46.895 "claim_type": "exclusive_write", 00:08:46.895 "zoned": false, 00:08:46.895 "supported_io_types": { 00:08:46.895 "read": true, 00:08:46.895 "write": true, 00:08:46.895 "unmap": true, 00:08:46.895 "flush": true, 00:08:46.895 "reset": true, 00:08:46.895 "nvme_admin": false, 00:08:46.895 "nvme_io": false, 00:08:46.895 "nvme_io_md": false, 00:08:46.895 "write_zeroes": true, 00:08:46.895 "zcopy": true, 00:08:46.895 "get_zone_info": false, 00:08:46.895 "zone_management": false, 00:08:46.895 "zone_append": false, 00:08:46.895 "compare": false, 
00:08:46.895 "compare_and_write": false, 00:08:46.895 "abort": true, 00:08:46.895 "seek_hole": false, 00:08:46.895 "seek_data": false, 00:08:46.895 "copy": true, 00:08:46.895 "nvme_iov_md": false 00:08:46.895 }, 00:08:46.895 "memory_domains": [ 00:08:46.895 { 00:08:46.895 "dma_device_id": "system", 00:08:46.895 "dma_device_type": 1 00:08:46.895 }, 00:08:46.895 { 00:08:46.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.895 "dma_device_type": 2 00:08:46.895 } 00:08:46.895 ], 00:08:46.895 "driver_specific": {} 00:08:46.895 } 00:08:46.895 ] 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.895 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.896 "name": "Existed_Raid", 00:08:46.896 "uuid": "f584e231-984b-4220-91ba-a5b9983e7883", 00:08:46.896 "strip_size_kb": 64, 00:08:46.896 "state": "online", 00:08:46.896 "raid_level": "concat", 00:08:46.896 "superblock": false, 00:08:46.896 "num_base_bdevs": 3, 00:08:46.896 "num_base_bdevs_discovered": 3, 00:08:46.896 "num_base_bdevs_operational": 3, 00:08:46.896 "base_bdevs_list": [ 00:08:46.896 { 00:08:46.896 "name": "BaseBdev1", 00:08:46.896 "uuid": "c6ad8edb-036a-4fe3-9b1e-bf2f15398c0b", 00:08:46.896 "is_configured": true, 00:08:46.896 "data_offset": 0, 00:08:46.896 "data_size": 65536 00:08:46.896 }, 00:08:46.896 { 00:08:46.896 "name": "BaseBdev2", 00:08:46.896 "uuid": "65314428-34a7-4b5e-a37e-8258b7517483", 00:08:46.896 "is_configured": true, 00:08:46.896 "data_offset": 0, 00:08:46.896 "data_size": 65536 00:08:46.896 }, 00:08:46.896 { 00:08:46.896 "name": "BaseBdev3", 00:08:46.896 "uuid": "68df8d4e-2fac-4fde-a533-c32fb329d466", 00:08:46.896 "is_configured": true, 00:08:46.896 "data_offset": 0, 00:08:46.896 "data_size": 65536 00:08:46.896 } 00:08:46.896 ] 00:08:46.896 }' 00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:46.896 21:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.157 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:47.157 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:47.157 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:47.157 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:47.157 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.157 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.157 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:47.157 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.157 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.157 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.157 [2024-11-26 21:16:05.286115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.157 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.418 "name": "Existed_Raid", 00:08:47.418 "aliases": [ 00:08:47.418 "f584e231-984b-4220-91ba-a5b9983e7883" 00:08:47.418 ], 00:08:47.418 "product_name": "Raid Volume", 00:08:47.418 "block_size": 512, 00:08:47.418 "num_blocks": 196608, 00:08:47.418 "uuid": "f584e231-984b-4220-91ba-a5b9983e7883", 00:08:47.418 "assigned_rate_limits": { 00:08:47.418 "rw_ios_per_sec": 0, 00:08:47.418 "rw_mbytes_per_sec": 0, 00:08:47.418 "r_mbytes_per_sec": 
0, 00:08:47.418 "w_mbytes_per_sec": 0 00:08:47.418 }, 00:08:47.418 "claimed": false, 00:08:47.418 "zoned": false, 00:08:47.418 "supported_io_types": { 00:08:47.418 "read": true, 00:08:47.418 "write": true, 00:08:47.418 "unmap": true, 00:08:47.418 "flush": true, 00:08:47.418 "reset": true, 00:08:47.418 "nvme_admin": false, 00:08:47.418 "nvme_io": false, 00:08:47.418 "nvme_io_md": false, 00:08:47.418 "write_zeroes": true, 00:08:47.418 "zcopy": false, 00:08:47.418 "get_zone_info": false, 00:08:47.418 "zone_management": false, 00:08:47.418 "zone_append": false, 00:08:47.418 "compare": false, 00:08:47.418 "compare_and_write": false, 00:08:47.418 "abort": false, 00:08:47.418 "seek_hole": false, 00:08:47.418 "seek_data": false, 00:08:47.418 "copy": false, 00:08:47.418 "nvme_iov_md": false 00:08:47.418 }, 00:08:47.418 "memory_domains": [ 00:08:47.418 { 00:08:47.418 "dma_device_id": "system", 00:08:47.418 "dma_device_type": 1 00:08:47.418 }, 00:08:47.418 { 00:08:47.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.418 "dma_device_type": 2 00:08:47.418 }, 00:08:47.418 { 00:08:47.418 "dma_device_id": "system", 00:08:47.418 "dma_device_type": 1 00:08:47.418 }, 00:08:47.418 { 00:08:47.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.418 "dma_device_type": 2 00:08:47.418 }, 00:08:47.418 { 00:08:47.418 "dma_device_id": "system", 00:08:47.418 "dma_device_type": 1 00:08:47.418 }, 00:08:47.418 { 00:08:47.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.418 "dma_device_type": 2 00:08:47.418 } 00:08:47.418 ], 00:08:47.418 "driver_specific": { 00:08:47.418 "raid": { 00:08:47.418 "uuid": "f584e231-984b-4220-91ba-a5b9983e7883", 00:08:47.418 "strip_size_kb": 64, 00:08:47.418 "state": "online", 00:08:47.418 "raid_level": "concat", 00:08:47.418 "superblock": false, 00:08:47.418 "num_base_bdevs": 3, 00:08:47.418 "num_base_bdevs_discovered": 3, 00:08:47.418 "num_base_bdevs_operational": 3, 00:08:47.418 "base_bdevs_list": [ 00:08:47.418 { 00:08:47.418 "name": "BaseBdev1", 
00:08:47.418 "uuid": "c6ad8edb-036a-4fe3-9b1e-bf2f15398c0b", 00:08:47.418 "is_configured": true, 00:08:47.418 "data_offset": 0, 00:08:47.418 "data_size": 65536 00:08:47.418 }, 00:08:47.418 { 00:08:47.418 "name": "BaseBdev2", 00:08:47.418 "uuid": "65314428-34a7-4b5e-a37e-8258b7517483", 00:08:47.418 "is_configured": true, 00:08:47.418 "data_offset": 0, 00:08:47.418 "data_size": 65536 00:08:47.418 }, 00:08:47.418 { 00:08:47.418 "name": "BaseBdev3", 00:08:47.418 "uuid": "68df8d4e-2fac-4fde-a533-c32fb329d466", 00:08:47.418 "is_configured": true, 00:08:47.418 "data_offset": 0, 00:08:47.418 "data_size": 65536 00:08:47.418 } 00:08:47.418 ] 00:08:47.418 } 00:08:47.418 } 00:08:47.418 }' 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:47.418 BaseBdev2 00:08:47.418 BaseBdev3' 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.418 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.418 [2024-11-26 21:16:05.545477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:47.418 [2024-11-26 21:16:05.545507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.418 [2024-11-26 21:16:05.545565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.678 "name": "Existed_Raid", 00:08:47.678 "uuid": "f584e231-984b-4220-91ba-a5b9983e7883", 00:08:47.678 "strip_size_kb": 64, 00:08:47.678 "state": "offline", 00:08:47.678 "raid_level": "concat", 00:08:47.678 "superblock": false, 00:08:47.678 "num_base_bdevs": 3, 00:08:47.678 "num_base_bdevs_discovered": 2, 00:08:47.678 "num_base_bdevs_operational": 2, 00:08:47.678 "base_bdevs_list": [ 00:08:47.678 { 00:08:47.678 "name": null, 00:08:47.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.678 "is_configured": false, 00:08:47.678 "data_offset": 0, 00:08:47.678 "data_size": 65536 00:08:47.678 }, 00:08:47.678 { 00:08:47.678 "name": "BaseBdev2", 00:08:47.678 "uuid": 
"65314428-34a7-4b5e-a37e-8258b7517483", 00:08:47.678 "is_configured": true, 00:08:47.678 "data_offset": 0, 00:08:47.678 "data_size": 65536 00:08:47.678 }, 00:08:47.678 { 00:08:47.678 "name": "BaseBdev3", 00:08:47.678 "uuid": "68df8d4e-2fac-4fde-a533-c32fb329d466", 00:08:47.678 "is_configured": true, 00:08:47.678 "data_offset": 0, 00:08:47.678 "data_size": 65536 00:08:47.678 } 00:08:47.678 ] 00:08:47.678 }' 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.678 21:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.247 [2024-11-26 21:16:06.163919] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.247 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.247 [2024-11-26 21:16:06.315092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:48.247 [2024-11-26 21:16:06.315221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.508 21:16:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.508 BaseBdev2 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.508 
21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.508 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.508 [ 00:08:48.508 { 00:08:48.508 "name": "BaseBdev2", 00:08:48.508 "aliases": [ 00:08:48.508 "355bd7c5-9adf-4123-b3c6-651d3969cdc8" 00:08:48.508 ], 00:08:48.508 "product_name": "Malloc disk", 00:08:48.508 "block_size": 512, 00:08:48.508 "num_blocks": 65536, 00:08:48.508 "uuid": "355bd7c5-9adf-4123-b3c6-651d3969cdc8", 00:08:48.508 "assigned_rate_limits": { 00:08:48.508 "rw_ios_per_sec": 0, 00:08:48.508 "rw_mbytes_per_sec": 0, 00:08:48.508 "r_mbytes_per_sec": 0, 00:08:48.508 "w_mbytes_per_sec": 0 00:08:48.508 }, 00:08:48.508 "claimed": false, 00:08:48.508 "zoned": false, 00:08:48.508 "supported_io_types": { 00:08:48.508 "read": true, 00:08:48.508 "write": true, 00:08:48.508 "unmap": true, 00:08:48.508 "flush": true, 00:08:48.508 "reset": true, 00:08:48.508 "nvme_admin": false, 00:08:48.508 "nvme_io": false, 00:08:48.508 "nvme_io_md": false, 00:08:48.508 "write_zeroes": true, 
00:08:48.508 "zcopy": true, 00:08:48.508 "get_zone_info": false, 00:08:48.508 "zone_management": false, 00:08:48.508 "zone_append": false, 00:08:48.508 "compare": false, 00:08:48.508 "compare_and_write": false, 00:08:48.508 "abort": true, 00:08:48.508 "seek_hole": false, 00:08:48.508 "seek_data": false, 00:08:48.508 "copy": true, 00:08:48.508 "nvme_iov_md": false 00:08:48.508 }, 00:08:48.508 "memory_domains": [ 00:08:48.508 { 00:08:48.508 "dma_device_id": "system", 00:08:48.508 "dma_device_type": 1 00:08:48.508 }, 00:08:48.508 { 00:08:48.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.508 "dma_device_type": 2 00:08:48.508 } 00:08:48.508 ], 00:08:48.508 "driver_specific": {} 00:08:48.508 } 00:08:48.508 ] 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.509 BaseBdev3 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.509 21:16:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.509 [ 00:08:48.509 { 00:08:48.509 "name": "BaseBdev3", 00:08:48.509 "aliases": [ 00:08:48.509 "457e81b1-26ef-42aa-842a-2057335e791e" 00:08:48.509 ], 00:08:48.509 "product_name": "Malloc disk", 00:08:48.509 "block_size": 512, 00:08:48.509 "num_blocks": 65536, 00:08:48.509 "uuid": "457e81b1-26ef-42aa-842a-2057335e791e", 00:08:48.509 "assigned_rate_limits": { 00:08:48.509 "rw_ios_per_sec": 0, 00:08:48.509 "rw_mbytes_per_sec": 0, 00:08:48.509 "r_mbytes_per_sec": 0, 00:08:48.509 "w_mbytes_per_sec": 0 00:08:48.509 }, 00:08:48.509 "claimed": false, 00:08:48.509 "zoned": false, 00:08:48.509 "supported_io_types": { 00:08:48.509 "read": true, 00:08:48.509 "write": true, 00:08:48.509 "unmap": true, 00:08:48.509 "flush": true, 00:08:48.509 "reset": true, 00:08:48.509 "nvme_admin": false, 00:08:48.509 "nvme_io": false, 00:08:48.509 "nvme_io_md": false, 00:08:48.509 "write_zeroes": true, 
00:08:48.509 "zcopy": true, 00:08:48.509 "get_zone_info": false, 00:08:48.509 "zone_management": false, 00:08:48.509 "zone_append": false, 00:08:48.509 "compare": false, 00:08:48.509 "compare_and_write": false, 00:08:48.509 "abort": true, 00:08:48.509 "seek_hole": false, 00:08:48.509 "seek_data": false, 00:08:48.509 "copy": true, 00:08:48.509 "nvme_iov_md": false 00:08:48.509 }, 00:08:48.509 "memory_domains": [ 00:08:48.509 { 00:08:48.509 "dma_device_id": "system", 00:08:48.509 "dma_device_type": 1 00:08:48.509 }, 00:08:48.509 { 00:08:48.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.509 "dma_device_type": 2 00:08:48.509 } 00:08:48.509 ], 00:08:48.509 "driver_specific": {} 00:08:48.509 } 00:08:48.509 ] 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.509 [2024-11-26 21:16:06.621200] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.509 [2024-11-26 21:16:06.621296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.509 [2024-11-26 21:16:06.621340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.509 [2024-11-26 21:16:06.623305] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.509 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.769 21:16:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.769 "name": "Existed_Raid", 00:08:48.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.769 "strip_size_kb": 64, 00:08:48.769 "state": "configuring", 00:08:48.769 "raid_level": "concat", 00:08:48.769 "superblock": false, 00:08:48.769 "num_base_bdevs": 3, 00:08:48.769 "num_base_bdevs_discovered": 2, 00:08:48.769 "num_base_bdevs_operational": 3, 00:08:48.769 "base_bdevs_list": [ 00:08:48.769 { 00:08:48.769 "name": "BaseBdev1", 00:08:48.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.769 "is_configured": false, 00:08:48.769 "data_offset": 0, 00:08:48.769 "data_size": 0 00:08:48.769 }, 00:08:48.769 { 00:08:48.769 "name": "BaseBdev2", 00:08:48.769 "uuid": "355bd7c5-9adf-4123-b3c6-651d3969cdc8", 00:08:48.769 "is_configured": true, 00:08:48.769 "data_offset": 0, 00:08:48.769 "data_size": 65536 00:08:48.769 }, 00:08:48.769 { 00:08:48.769 "name": "BaseBdev3", 00:08:48.769 "uuid": "457e81b1-26ef-42aa-842a-2057335e791e", 00:08:48.769 "is_configured": true, 00:08:48.769 "data_offset": 0, 00:08:48.769 "data_size": 65536 00:08:48.769 } 00:08:48.769 ] 00:08:48.769 }' 00:08:48.769 21:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.769 21:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.029 [2024-11-26 21:16:07.080464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.029 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.029 "name": "Existed_Raid", 00:08:49.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.029 "strip_size_kb": 64, 00:08:49.029 "state": "configuring", 00:08:49.029 "raid_level": "concat", 00:08:49.029 "superblock": false, 
00:08:49.029 "num_base_bdevs": 3, 00:08:49.029 "num_base_bdevs_discovered": 1, 00:08:49.030 "num_base_bdevs_operational": 3, 00:08:49.030 "base_bdevs_list": [ 00:08:49.030 { 00:08:49.030 "name": "BaseBdev1", 00:08:49.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.030 "is_configured": false, 00:08:49.030 "data_offset": 0, 00:08:49.030 "data_size": 0 00:08:49.030 }, 00:08:49.030 { 00:08:49.030 "name": null, 00:08:49.030 "uuid": "355bd7c5-9adf-4123-b3c6-651d3969cdc8", 00:08:49.030 "is_configured": false, 00:08:49.030 "data_offset": 0, 00:08:49.030 "data_size": 65536 00:08:49.030 }, 00:08:49.030 { 00:08:49.030 "name": "BaseBdev3", 00:08:49.030 "uuid": "457e81b1-26ef-42aa-842a-2057335e791e", 00:08:49.030 "is_configured": true, 00:08:49.030 "data_offset": 0, 00:08:49.030 "data_size": 65536 00:08:49.030 } 00:08:49.030 ] 00:08:49.030 }' 00:08:49.030 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.030 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.599 
21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.599 [2024-11-26 21:16:07.643603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.599 BaseBdev1 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.599 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.599 [ 00:08:49.599 { 00:08:49.599 "name": "BaseBdev1", 00:08:49.599 "aliases": [ 00:08:49.599 "2f4ecbad-e15c-411c-b710-b8b8d67281cf" 00:08:49.599 ], 00:08:49.599 "product_name": 
"Malloc disk", 00:08:49.599 "block_size": 512, 00:08:49.599 "num_blocks": 65536, 00:08:49.599 "uuid": "2f4ecbad-e15c-411c-b710-b8b8d67281cf", 00:08:49.599 "assigned_rate_limits": { 00:08:49.599 "rw_ios_per_sec": 0, 00:08:49.599 "rw_mbytes_per_sec": 0, 00:08:49.599 "r_mbytes_per_sec": 0, 00:08:49.599 "w_mbytes_per_sec": 0 00:08:49.599 }, 00:08:49.599 "claimed": true, 00:08:49.599 "claim_type": "exclusive_write", 00:08:49.599 "zoned": false, 00:08:49.599 "supported_io_types": { 00:08:49.599 "read": true, 00:08:49.599 "write": true, 00:08:49.599 "unmap": true, 00:08:49.599 "flush": true, 00:08:49.599 "reset": true, 00:08:49.599 "nvme_admin": false, 00:08:49.599 "nvme_io": false, 00:08:49.599 "nvme_io_md": false, 00:08:49.599 "write_zeroes": true, 00:08:49.599 "zcopy": true, 00:08:49.599 "get_zone_info": false, 00:08:49.599 "zone_management": false, 00:08:49.599 "zone_append": false, 00:08:49.599 "compare": false, 00:08:49.599 "compare_and_write": false, 00:08:49.599 "abort": true, 00:08:49.599 "seek_hole": false, 00:08:49.599 "seek_data": false, 00:08:49.599 "copy": true, 00:08:49.599 "nvme_iov_md": false 00:08:49.599 }, 00:08:49.599 "memory_domains": [ 00:08:49.599 { 00:08:49.599 "dma_device_id": "system", 00:08:49.599 "dma_device_type": 1 00:08:49.599 }, 00:08:49.600 { 00:08:49.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.600 "dma_device_type": 2 00:08:49.600 } 00:08:49.600 ], 00:08:49.600 "driver_specific": {} 00:08:49.600 } 00:08:49.600 ] 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.600 21:16:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.600 "name": "Existed_Raid", 00:08:49.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.600 "strip_size_kb": 64, 00:08:49.600 "state": "configuring", 00:08:49.600 "raid_level": "concat", 00:08:49.600 "superblock": false, 00:08:49.600 "num_base_bdevs": 3, 00:08:49.600 "num_base_bdevs_discovered": 2, 00:08:49.600 "num_base_bdevs_operational": 3, 00:08:49.600 "base_bdevs_list": [ 00:08:49.600 { 00:08:49.600 "name": "BaseBdev1", 
00:08:49.600 "uuid": "2f4ecbad-e15c-411c-b710-b8b8d67281cf", 00:08:49.600 "is_configured": true, 00:08:49.600 "data_offset": 0, 00:08:49.600 "data_size": 65536 00:08:49.600 }, 00:08:49.600 { 00:08:49.600 "name": null, 00:08:49.600 "uuid": "355bd7c5-9adf-4123-b3c6-651d3969cdc8", 00:08:49.600 "is_configured": false, 00:08:49.600 "data_offset": 0, 00:08:49.600 "data_size": 65536 00:08:49.600 }, 00:08:49.600 { 00:08:49.600 "name": "BaseBdev3", 00:08:49.600 "uuid": "457e81b1-26ef-42aa-842a-2057335e791e", 00:08:49.600 "is_configured": true, 00:08:49.600 "data_offset": 0, 00:08:49.600 "data_size": 65536 00:08:49.600 } 00:08:49.600 ] 00:08:49.600 }' 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.600 21:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.170 [2024-11-26 21:16:08.094868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.170 
21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.170 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.170 "name": "Existed_Raid", 00:08:50.170 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:50.170 "strip_size_kb": 64, 00:08:50.170 "state": "configuring", 00:08:50.170 "raid_level": "concat", 00:08:50.170 "superblock": false, 00:08:50.171 "num_base_bdevs": 3, 00:08:50.171 "num_base_bdevs_discovered": 1, 00:08:50.171 "num_base_bdevs_operational": 3, 00:08:50.171 "base_bdevs_list": [ 00:08:50.171 { 00:08:50.171 "name": "BaseBdev1", 00:08:50.171 "uuid": "2f4ecbad-e15c-411c-b710-b8b8d67281cf", 00:08:50.171 "is_configured": true, 00:08:50.171 "data_offset": 0, 00:08:50.171 "data_size": 65536 00:08:50.171 }, 00:08:50.171 { 00:08:50.171 "name": null, 00:08:50.171 "uuid": "355bd7c5-9adf-4123-b3c6-651d3969cdc8", 00:08:50.171 "is_configured": false, 00:08:50.171 "data_offset": 0, 00:08:50.171 "data_size": 65536 00:08:50.171 }, 00:08:50.171 { 00:08:50.171 "name": null, 00:08:50.171 "uuid": "457e81b1-26ef-42aa-842a-2057335e791e", 00:08:50.171 "is_configured": false, 00:08:50.171 "data_offset": 0, 00:08:50.171 "data_size": 65536 00:08:50.171 } 00:08:50.171 ] 00:08:50.171 }' 00:08:50.171 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.171 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.435 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.435 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.435 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.435 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:50.435 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.435 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.705 [2024-11-26 21:16:08.594040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.705 "name": "Existed_Raid", 00:08:50.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.705 "strip_size_kb": 64, 00:08:50.705 "state": "configuring", 00:08:50.705 "raid_level": "concat", 00:08:50.705 "superblock": false, 00:08:50.705 "num_base_bdevs": 3, 00:08:50.705 "num_base_bdevs_discovered": 2, 00:08:50.705 "num_base_bdevs_operational": 3, 00:08:50.705 "base_bdevs_list": [ 00:08:50.705 { 00:08:50.705 "name": "BaseBdev1", 00:08:50.705 "uuid": "2f4ecbad-e15c-411c-b710-b8b8d67281cf", 00:08:50.705 "is_configured": true, 00:08:50.705 "data_offset": 0, 00:08:50.705 "data_size": 65536 00:08:50.705 }, 00:08:50.705 { 00:08:50.705 "name": null, 00:08:50.705 "uuid": "355bd7c5-9adf-4123-b3c6-651d3969cdc8", 00:08:50.705 "is_configured": false, 00:08:50.705 "data_offset": 0, 00:08:50.705 "data_size": 65536 00:08:50.705 }, 00:08:50.705 { 00:08:50.705 "name": "BaseBdev3", 00:08:50.705 "uuid": "457e81b1-26ef-42aa-842a-2057335e791e", 00:08:50.705 "is_configured": true, 00:08:50.705 "data_offset": 0, 00:08:50.705 "data_size": 65536 00:08:50.705 } 00:08:50.705 ] 00:08:50.705 }' 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.705 21:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.965 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.965 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:50.965 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:50.965 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.965 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.965 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:50.965 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:50.965 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.965 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.965 [2024-11-26 21:16:09.073232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.224 21:16:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.224 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.224 "name": "Existed_Raid", 00:08:51.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.224 "strip_size_kb": 64, 00:08:51.224 "state": "configuring", 00:08:51.224 "raid_level": "concat", 00:08:51.224 "superblock": false, 00:08:51.224 "num_base_bdevs": 3, 00:08:51.224 "num_base_bdevs_discovered": 1, 00:08:51.224 "num_base_bdevs_operational": 3, 00:08:51.224 "base_bdevs_list": [ 00:08:51.224 { 00:08:51.224 "name": null, 00:08:51.224 "uuid": "2f4ecbad-e15c-411c-b710-b8b8d67281cf", 00:08:51.224 "is_configured": false, 00:08:51.224 "data_offset": 0, 00:08:51.224 "data_size": 65536 00:08:51.224 }, 00:08:51.224 { 00:08:51.224 "name": null, 00:08:51.224 "uuid": "355bd7c5-9adf-4123-b3c6-651d3969cdc8", 00:08:51.224 "is_configured": false, 00:08:51.224 "data_offset": 0, 00:08:51.225 "data_size": 65536 00:08:51.225 }, 00:08:51.225 { 00:08:51.225 "name": "BaseBdev3", 00:08:51.225 "uuid": "457e81b1-26ef-42aa-842a-2057335e791e", 00:08:51.225 "is_configured": true, 00:08:51.225 "data_offset": 0, 00:08:51.225 "data_size": 65536 00:08:51.225 } 00:08:51.225 ] 00:08:51.225 }' 00:08:51.225 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.225 21:16:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.484 [2024-11-26 21:16:09.593085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.484 21:16:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.484 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.743 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.743 "name": "Existed_Raid", 00:08:51.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.744 "strip_size_kb": 64, 00:08:51.744 "state": "configuring", 00:08:51.744 "raid_level": "concat", 00:08:51.744 "superblock": false, 00:08:51.744 "num_base_bdevs": 3, 00:08:51.744 "num_base_bdevs_discovered": 2, 00:08:51.744 "num_base_bdevs_operational": 3, 00:08:51.744 "base_bdevs_list": [ 00:08:51.744 { 00:08:51.744 "name": null, 00:08:51.744 "uuid": "2f4ecbad-e15c-411c-b710-b8b8d67281cf", 00:08:51.744 "is_configured": false, 00:08:51.744 "data_offset": 0, 00:08:51.744 "data_size": 65536 00:08:51.744 }, 00:08:51.744 { 00:08:51.744 "name": "BaseBdev2", 00:08:51.744 "uuid": "355bd7c5-9adf-4123-b3c6-651d3969cdc8", 00:08:51.744 "is_configured": true, 00:08:51.744 "data_offset": 
0, 00:08:51.744 "data_size": 65536 00:08:51.744 }, 00:08:51.744 { 00:08:51.744 "name": "BaseBdev3", 00:08:51.744 "uuid": "457e81b1-26ef-42aa-842a-2057335e791e", 00:08:51.744 "is_configured": true, 00:08:51.744 "data_offset": 0, 00:08:51.744 "data_size": 65536 00:08:51.744 } 00:08:51.744 ] 00:08:51.744 }' 00:08:51.744 21:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.744 21:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.003 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.003 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.003 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.003 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:52.003 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.003 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:52.003 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.003 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.003 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.003 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:52.003 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.004 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2f4ecbad-e15c-411c-b710-b8b8d67281cf 00:08:52.004 21:16:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.004 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.263 [2024-11-26 21:16:10.160831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:52.263 [2024-11-26 21:16:10.160927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:52.263 [2024-11-26 21:16:10.160943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:52.263 [2024-11-26 21:16:10.161211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:52.263 [2024-11-26 21:16:10.161373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:52.263 [2024-11-26 21:16:10.161384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:52.263 [2024-11-26 21:16:10.161644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.263 NewBaseBdev 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:52.263 
21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.263 [ 00:08:52.263 { 00:08:52.263 "name": "NewBaseBdev", 00:08:52.263 "aliases": [ 00:08:52.263 "2f4ecbad-e15c-411c-b710-b8b8d67281cf" 00:08:52.263 ], 00:08:52.263 "product_name": "Malloc disk", 00:08:52.263 "block_size": 512, 00:08:52.263 "num_blocks": 65536, 00:08:52.263 "uuid": "2f4ecbad-e15c-411c-b710-b8b8d67281cf", 00:08:52.263 "assigned_rate_limits": { 00:08:52.263 "rw_ios_per_sec": 0, 00:08:52.263 "rw_mbytes_per_sec": 0, 00:08:52.263 "r_mbytes_per_sec": 0, 00:08:52.263 "w_mbytes_per_sec": 0 00:08:52.263 }, 00:08:52.263 "claimed": true, 00:08:52.263 "claim_type": "exclusive_write", 00:08:52.263 "zoned": false, 00:08:52.263 "supported_io_types": { 00:08:52.263 "read": true, 00:08:52.263 "write": true, 00:08:52.263 "unmap": true, 00:08:52.263 "flush": true, 00:08:52.263 "reset": true, 00:08:52.263 "nvme_admin": false, 00:08:52.263 "nvme_io": false, 00:08:52.263 "nvme_io_md": false, 00:08:52.263 "write_zeroes": true, 00:08:52.263 "zcopy": true, 00:08:52.263 "get_zone_info": false, 00:08:52.263 "zone_management": false, 00:08:52.263 "zone_append": false, 00:08:52.263 "compare": false, 00:08:52.263 "compare_and_write": false, 00:08:52.263 "abort": true, 00:08:52.263 "seek_hole": false, 00:08:52.263 "seek_data": false, 00:08:52.263 "copy": true, 00:08:52.263 "nvme_iov_md": false 00:08:52.263 }, 00:08:52.263 
"memory_domains": [ 00:08:52.263 { 00:08:52.263 "dma_device_id": "system", 00:08:52.263 "dma_device_type": 1 00:08:52.263 }, 00:08:52.263 { 00:08:52.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.263 "dma_device_type": 2 00:08:52.263 } 00:08:52.263 ], 00:08:52.263 "driver_specific": {} 00:08:52.263 } 00:08:52.263 ] 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:52.263 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.264 "name": "Existed_Raid", 00:08:52.264 "uuid": "db285d82-b62b-4267-9a8b-6f92371d96ff", 00:08:52.264 "strip_size_kb": 64, 00:08:52.264 "state": "online", 00:08:52.264 "raid_level": "concat", 00:08:52.264 "superblock": false, 00:08:52.264 "num_base_bdevs": 3, 00:08:52.264 "num_base_bdevs_discovered": 3, 00:08:52.264 "num_base_bdevs_operational": 3, 00:08:52.264 "base_bdevs_list": [ 00:08:52.264 { 00:08:52.264 "name": "NewBaseBdev", 00:08:52.264 "uuid": "2f4ecbad-e15c-411c-b710-b8b8d67281cf", 00:08:52.264 "is_configured": true, 00:08:52.264 "data_offset": 0, 00:08:52.264 "data_size": 65536 00:08:52.264 }, 00:08:52.264 { 00:08:52.264 "name": "BaseBdev2", 00:08:52.264 "uuid": "355bd7c5-9adf-4123-b3c6-651d3969cdc8", 00:08:52.264 "is_configured": true, 00:08:52.264 "data_offset": 0, 00:08:52.264 "data_size": 65536 00:08:52.264 }, 00:08:52.264 { 00:08:52.264 "name": "BaseBdev3", 00:08:52.264 "uuid": "457e81b1-26ef-42aa-842a-2057335e791e", 00:08:52.264 "is_configured": true, 00:08:52.264 "data_offset": 0, 00:08:52.264 "data_size": 65536 00:08:52.264 } 00:08:52.264 ] 00:08:52.264 }' 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.264 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.523 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:52.523 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:52.523 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:52.523 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.523 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.523 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.524 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:52.524 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.524 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.524 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.524 [2024-11-26 21:16:10.628353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.524 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.524 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.524 "name": "Existed_Raid", 00:08:52.524 "aliases": [ 00:08:52.524 "db285d82-b62b-4267-9a8b-6f92371d96ff" 00:08:52.524 ], 00:08:52.524 "product_name": "Raid Volume", 00:08:52.524 "block_size": 512, 00:08:52.524 "num_blocks": 196608, 00:08:52.524 "uuid": "db285d82-b62b-4267-9a8b-6f92371d96ff", 00:08:52.524 "assigned_rate_limits": { 00:08:52.524 "rw_ios_per_sec": 0, 00:08:52.524 "rw_mbytes_per_sec": 0, 00:08:52.524 "r_mbytes_per_sec": 0, 00:08:52.524 "w_mbytes_per_sec": 0 00:08:52.524 }, 00:08:52.524 "claimed": false, 00:08:52.524 "zoned": false, 00:08:52.524 "supported_io_types": { 00:08:52.524 "read": true, 00:08:52.524 "write": true, 00:08:52.524 "unmap": true, 00:08:52.524 "flush": true, 00:08:52.524 "reset": true, 00:08:52.524 "nvme_admin": false, 00:08:52.524 "nvme_io": false, 00:08:52.524 "nvme_io_md": false, 00:08:52.524 "write_zeroes": true, 
00:08:52.524 "zcopy": false, 00:08:52.524 "get_zone_info": false, 00:08:52.524 "zone_management": false, 00:08:52.524 "zone_append": false, 00:08:52.524 "compare": false, 00:08:52.524 "compare_and_write": false, 00:08:52.524 "abort": false, 00:08:52.524 "seek_hole": false, 00:08:52.524 "seek_data": false, 00:08:52.524 "copy": false, 00:08:52.524 "nvme_iov_md": false 00:08:52.524 }, 00:08:52.524 "memory_domains": [ 00:08:52.524 { 00:08:52.524 "dma_device_id": "system", 00:08:52.524 "dma_device_type": 1 00:08:52.524 }, 00:08:52.524 { 00:08:52.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.524 "dma_device_type": 2 00:08:52.524 }, 00:08:52.524 { 00:08:52.524 "dma_device_id": "system", 00:08:52.524 "dma_device_type": 1 00:08:52.524 }, 00:08:52.524 { 00:08:52.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.524 "dma_device_type": 2 00:08:52.524 }, 00:08:52.524 { 00:08:52.524 "dma_device_id": "system", 00:08:52.524 "dma_device_type": 1 00:08:52.524 }, 00:08:52.524 { 00:08:52.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.524 "dma_device_type": 2 00:08:52.524 } 00:08:52.524 ], 00:08:52.524 "driver_specific": { 00:08:52.524 "raid": { 00:08:52.524 "uuid": "db285d82-b62b-4267-9a8b-6f92371d96ff", 00:08:52.524 "strip_size_kb": 64, 00:08:52.524 "state": "online", 00:08:52.524 "raid_level": "concat", 00:08:52.524 "superblock": false, 00:08:52.524 "num_base_bdevs": 3, 00:08:52.524 "num_base_bdevs_discovered": 3, 00:08:52.524 "num_base_bdevs_operational": 3, 00:08:52.524 "base_bdevs_list": [ 00:08:52.524 { 00:08:52.524 "name": "NewBaseBdev", 00:08:52.524 "uuid": "2f4ecbad-e15c-411c-b710-b8b8d67281cf", 00:08:52.524 "is_configured": true, 00:08:52.524 "data_offset": 0, 00:08:52.524 "data_size": 65536 00:08:52.524 }, 00:08:52.524 { 00:08:52.524 "name": "BaseBdev2", 00:08:52.524 "uuid": "355bd7c5-9adf-4123-b3c6-651d3969cdc8", 00:08:52.524 "is_configured": true, 00:08:52.524 "data_offset": 0, 00:08:52.524 "data_size": 65536 00:08:52.524 }, 00:08:52.524 { 
00:08:52.524 "name": "BaseBdev3", 00:08:52.524 "uuid": "457e81b1-26ef-42aa-842a-2057335e791e", 00:08:52.524 "is_configured": true, 00:08:52.524 "data_offset": 0, 00:08:52.524 "data_size": 65536 00:08:52.524 } 00:08:52.524 ] 00:08:52.524 } 00:08:52.524 } 00:08:52.524 }' 00:08:52.524 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:52.784 BaseBdev2 00:08:52.784 BaseBdev3' 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:52.784 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:52.785 [2024-11-26 21:16:10.895603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:52.785 [2024-11-26 21:16:10.895630] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.785 [2024-11-26 21:16:10.895702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.785 [2024-11-26 21:16:10.895765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.785 [2024-11-26 21:16:10.895778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65451 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65451 ']' 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65451 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65451 00:08:52.785 killing process with pid 65451 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65451' 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65451 00:08:52.785 [2024-11-26 21:16:10.930498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.785 21:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65451 00:08:53.355 [2024-11-26 21:16:11.217274] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:54.294 00:08:54.294 real 0m10.360s 00:08:54.294 user 0m16.553s 00:08:54.294 sys 0m1.785s 00:08:54.294 ************************************ 00:08:54.294 END TEST raid_state_function_test 00:08:54.294 ************************************ 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.294 21:16:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:54.294 21:16:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:54.294 21:16:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.294 21:16:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.294 ************************************ 00:08:54.294 START TEST raid_state_function_test_sb 00:08:54.294 ************************************ 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66078 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66078' 00:08:54.294 Process raid pid: 66078 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66078 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66078 ']' 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.294 21:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.294 [2024-11-26 21:16:12.435001] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:54.294 [2024-11-26 21:16:12.435206] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.553 [2024-11-26 21:16:12.608438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.814 [2024-11-26 21:16:12.720120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.814 [2024-11-26 21:16:12.907536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.814 [2024-11-26 21:16:12.907576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.383 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.383 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:55.383 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.383 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.383 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.383 [2024-11-26 21:16:13.268093] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.383 [2024-11-26 21:16:13.268144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.383 [2024-11-26 
21:16:13.268155] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.383 [2024-11-26 21:16:13.268165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.384 [2024-11-26 21:16:13.268171] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.384 [2024-11-26 21:16:13.268180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.384 "name": "Existed_Raid", 00:08:55.384 "uuid": "620ba01b-b0a2-481f-9d2e-295e3bca7ef4", 00:08:55.384 "strip_size_kb": 64, 00:08:55.384 "state": "configuring", 00:08:55.384 "raid_level": "concat", 00:08:55.384 "superblock": true, 00:08:55.384 "num_base_bdevs": 3, 00:08:55.384 "num_base_bdevs_discovered": 0, 00:08:55.384 "num_base_bdevs_operational": 3, 00:08:55.384 "base_bdevs_list": [ 00:08:55.384 { 00:08:55.384 "name": "BaseBdev1", 00:08:55.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.384 "is_configured": false, 00:08:55.384 "data_offset": 0, 00:08:55.384 "data_size": 0 00:08:55.384 }, 00:08:55.384 { 00:08:55.384 "name": "BaseBdev2", 00:08:55.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.384 "is_configured": false, 00:08:55.384 "data_offset": 0, 00:08:55.384 "data_size": 0 00:08:55.384 }, 00:08:55.384 { 00:08:55.384 "name": "BaseBdev3", 00:08:55.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.384 "is_configured": false, 00:08:55.384 "data_offset": 0, 00:08:55.384 "data_size": 0 00:08:55.384 } 00:08:55.384 ] 00:08:55.384 }' 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.384 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.644 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.644 21:16:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.644 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.644 [2024-11-26 21:16:13.687307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.644 [2024-11-26 21:16:13.687413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:55.644 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.644 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.644 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.644 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.644 [2024-11-26 21:16:13.699285] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.644 [2024-11-26 21:16:13.699369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.644 [2024-11-26 21:16:13.699397] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.644 [2024-11-26 21:16:13.699419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.644 [2024-11-26 21:16:13.699438] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.644 [2024-11-26 21:16:13.699458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.644 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.644 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:55.644 
21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.644 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.644 [2024-11-26 21:16:13.743956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.644 BaseBdev1 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.645 [ 00:08:55.645 { 
00:08:55.645 "name": "BaseBdev1", 00:08:55.645 "aliases": [ 00:08:55.645 "6917388f-b5e1-488c-9bf1-cfa5c317a254" 00:08:55.645 ], 00:08:55.645 "product_name": "Malloc disk", 00:08:55.645 "block_size": 512, 00:08:55.645 "num_blocks": 65536, 00:08:55.645 "uuid": "6917388f-b5e1-488c-9bf1-cfa5c317a254", 00:08:55.645 "assigned_rate_limits": { 00:08:55.645 "rw_ios_per_sec": 0, 00:08:55.645 "rw_mbytes_per_sec": 0, 00:08:55.645 "r_mbytes_per_sec": 0, 00:08:55.645 "w_mbytes_per_sec": 0 00:08:55.645 }, 00:08:55.645 "claimed": true, 00:08:55.645 "claim_type": "exclusive_write", 00:08:55.645 "zoned": false, 00:08:55.645 "supported_io_types": { 00:08:55.645 "read": true, 00:08:55.645 "write": true, 00:08:55.645 "unmap": true, 00:08:55.645 "flush": true, 00:08:55.645 "reset": true, 00:08:55.645 "nvme_admin": false, 00:08:55.645 "nvme_io": false, 00:08:55.645 "nvme_io_md": false, 00:08:55.645 "write_zeroes": true, 00:08:55.645 "zcopy": true, 00:08:55.645 "get_zone_info": false, 00:08:55.645 "zone_management": false, 00:08:55.645 "zone_append": false, 00:08:55.645 "compare": false, 00:08:55.645 "compare_and_write": false, 00:08:55.645 "abort": true, 00:08:55.645 "seek_hole": false, 00:08:55.645 "seek_data": false, 00:08:55.645 "copy": true, 00:08:55.645 "nvme_iov_md": false 00:08:55.645 }, 00:08:55.645 "memory_domains": [ 00:08:55.645 { 00:08:55.645 "dma_device_id": "system", 00:08:55.645 "dma_device_type": 1 00:08:55.645 }, 00:08:55.645 { 00:08:55.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.645 "dma_device_type": 2 00:08:55.645 } 00:08:55.645 ], 00:08:55.645 "driver_specific": {} 00:08:55.645 } 00:08:55.645 ] 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.645 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.906 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.906 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.906 "name": "Existed_Raid", 00:08:55.906 "uuid": "97d530c1-faa3-4748-826f-66cc64554504", 00:08:55.906 "strip_size_kb": 64, 00:08:55.906 "state": "configuring", 00:08:55.906 "raid_level": "concat", 00:08:55.906 "superblock": true, 00:08:55.906 
"num_base_bdevs": 3, 00:08:55.906 "num_base_bdevs_discovered": 1, 00:08:55.906 "num_base_bdevs_operational": 3, 00:08:55.906 "base_bdevs_list": [ 00:08:55.906 { 00:08:55.906 "name": "BaseBdev1", 00:08:55.906 "uuid": "6917388f-b5e1-488c-9bf1-cfa5c317a254", 00:08:55.906 "is_configured": true, 00:08:55.906 "data_offset": 2048, 00:08:55.906 "data_size": 63488 00:08:55.906 }, 00:08:55.906 { 00:08:55.906 "name": "BaseBdev2", 00:08:55.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.906 "is_configured": false, 00:08:55.906 "data_offset": 0, 00:08:55.906 "data_size": 0 00:08:55.906 }, 00:08:55.906 { 00:08:55.906 "name": "BaseBdev3", 00:08:55.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.906 "is_configured": false, 00:08:55.906 "data_offset": 0, 00:08:55.906 "data_size": 0 00:08:55.906 } 00:08:55.906 ] 00:08:55.906 }' 00:08:55.906 21:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.906 21:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.167 [2024-11-26 21:16:14.195315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.167 [2024-11-26 21:16:14.195435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.167 
21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.167 [2024-11-26 21:16:14.207328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.167 [2024-11-26 21:16:14.209108] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.167 [2024-11-26 21:16:14.209149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.167 [2024-11-26 21:16:14.209159] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.167 [2024-11-26 21:16:14.209168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.167 "name": "Existed_Raid", 00:08:56.167 "uuid": "6a535a5e-8ce5-44ee-96be-9d58053bd44a", 00:08:56.167 "strip_size_kb": 64, 00:08:56.167 "state": "configuring", 00:08:56.167 "raid_level": "concat", 00:08:56.167 "superblock": true, 00:08:56.167 "num_base_bdevs": 3, 00:08:56.167 "num_base_bdevs_discovered": 1, 00:08:56.167 "num_base_bdevs_operational": 3, 00:08:56.167 "base_bdevs_list": [ 00:08:56.167 { 00:08:56.167 "name": "BaseBdev1", 00:08:56.167 "uuid": "6917388f-b5e1-488c-9bf1-cfa5c317a254", 00:08:56.167 "is_configured": true, 00:08:56.167 "data_offset": 2048, 00:08:56.167 "data_size": 63488 00:08:56.167 }, 00:08:56.167 { 00:08:56.167 "name": "BaseBdev2", 00:08:56.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.167 "is_configured": false, 00:08:56.167 "data_offset": 0, 00:08:56.167 "data_size": 0 00:08:56.167 }, 00:08:56.167 { 00:08:56.167 "name": "BaseBdev3", 00:08:56.167 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:56.167 "is_configured": false, 00:08:56.167 "data_offset": 0, 00:08:56.167 "data_size": 0 00:08:56.167 } 00:08:56.167 ] 00:08:56.167 }' 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.167 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.738 [2024-11-26 21:16:14.662557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.738 BaseBdev2 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.738 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.738 [ 00:08:56.738 { 00:08:56.738 "name": "BaseBdev2", 00:08:56.738 "aliases": [ 00:08:56.738 "c0687355-ae2c-4368-b584-04d19dfe43f0" 00:08:56.738 ], 00:08:56.738 "product_name": "Malloc disk", 00:08:56.738 "block_size": 512, 00:08:56.738 "num_blocks": 65536, 00:08:56.738 "uuid": "c0687355-ae2c-4368-b584-04d19dfe43f0", 00:08:56.738 "assigned_rate_limits": { 00:08:56.738 "rw_ios_per_sec": 0, 00:08:56.738 "rw_mbytes_per_sec": 0, 00:08:56.738 "r_mbytes_per_sec": 0, 00:08:56.738 "w_mbytes_per_sec": 0 00:08:56.738 }, 00:08:56.738 "claimed": true, 00:08:56.738 "claim_type": "exclusive_write", 00:08:56.738 "zoned": false, 00:08:56.738 "supported_io_types": { 00:08:56.738 "read": true, 00:08:56.738 "write": true, 00:08:56.738 "unmap": true, 00:08:56.738 "flush": true, 00:08:56.738 "reset": true, 00:08:56.738 "nvme_admin": false, 00:08:56.738 "nvme_io": false, 00:08:56.738 "nvme_io_md": false, 00:08:56.738 "write_zeroes": true, 00:08:56.738 "zcopy": true, 00:08:56.738 "get_zone_info": false, 00:08:56.739 "zone_management": false, 00:08:56.739 "zone_append": false, 00:08:56.739 "compare": false, 00:08:56.739 "compare_and_write": false, 00:08:56.739 "abort": true, 00:08:56.739 "seek_hole": false, 00:08:56.739 "seek_data": false, 00:08:56.739 "copy": true, 00:08:56.739 "nvme_iov_md": false 00:08:56.739 }, 00:08:56.739 "memory_domains": [ 00:08:56.739 { 00:08:56.739 "dma_device_id": "system", 00:08:56.739 "dma_device_type": 1 00:08:56.739 }, 00:08:56.739 { 00:08:56.739 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.739 "dma_device_type": 2 00:08:56.739 } 00:08:56.739 ], 00:08:56.739 "driver_specific": {} 00:08:56.739 } 00:08:56.739 ] 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.739 "name": "Existed_Raid", 00:08:56.739 "uuid": "6a535a5e-8ce5-44ee-96be-9d58053bd44a", 00:08:56.739 "strip_size_kb": 64, 00:08:56.739 "state": "configuring", 00:08:56.739 "raid_level": "concat", 00:08:56.739 "superblock": true, 00:08:56.739 "num_base_bdevs": 3, 00:08:56.739 "num_base_bdevs_discovered": 2, 00:08:56.739 "num_base_bdevs_operational": 3, 00:08:56.739 "base_bdevs_list": [ 00:08:56.739 { 00:08:56.739 "name": "BaseBdev1", 00:08:56.739 "uuid": "6917388f-b5e1-488c-9bf1-cfa5c317a254", 00:08:56.739 "is_configured": true, 00:08:56.739 "data_offset": 2048, 00:08:56.739 "data_size": 63488 00:08:56.739 }, 00:08:56.739 { 00:08:56.739 "name": "BaseBdev2", 00:08:56.739 "uuid": "c0687355-ae2c-4368-b584-04d19dfe43f0", 00:08:56.739 "is_configured": true, 00:08:56.739 "data_offset": 2048, 00:08:56.739 "data_size": 63488 00:08:56.739 }, 00:08:56.739 { 00:08:56.739 "name": "BaseBdev3", 00:08:56.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.739 "is_configured": false, 00:08:56.739 "data_offset": 0, 00:08:56.739 "data_size": 0 00:08:56.739 } 00:08:56.739 ] 00:08:56.739 }' 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.739 21:16:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.999 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.999 21:16:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.999 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.259 [2024-11-26 21:16:15.202421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.259 [2024-11-26 21:16:15.202829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:57.259 [2024-11-26 21:16:15.202859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:57.259 [2024-11-26 21:16:15.203153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:57.259 [2024-11-26 21:16:15.203331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:57.259 [2024-11-26 21:16:15.203342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:57.259 [2024-11-26 21:16:15.203484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.259 BaseBdev3 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.259 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.259 [ 00:08:57.259 { 00:08:57.259 "name": "BaseBdev3", 00:08:57.259 "aliases": [ 00:08:57.259 "8473dfb1-4630-445a-9f38-f8fcf52ce0a3" 00:08:57.259 ], 00:08:57.259 "product_name": "Malloc disk", 00:08:57.259 "block_size": 512, 00:08:57.259 "num_blocks": 65536, 00:08:57.259 "uuid": "8473dfb1-4630-445a-9f38-f8fcf52ce0a3", 00:08:57.259 "assigned_rate_limits": { 00:08:57.259 "rw_ios_per_sec": 0, 00:08:57.259 "rw_mbytes_per_sec": 0, 00:08:57.259 "r_mbytes_per_sec": 0, 00:08:57.259 "w_mbytes_per_sec": 0 00:08:57.259 }, 00:08:57.259 "claimed": true, 00:08:57.259 "claim_type": "exclusive_write", 00:08:57.259 "zoned": false, 00:08:57.259 "supported_io_types": { 00:08:57.259 "read": true, 00:08:57.259 "write": true, 00:08:57.259 "unmap": true, 00:08:57.259 "flush": true, 00:08:57.259 "reset": true, 00:08:57.259 "nvme_admin": false, 00:08:57.259 "nvme_io": false, 00:08:57.259 "nvme_io_md": false, 00:08:57.259 "write_zeroes": true, 00:08:57.259 "zcopy": true, 00:08:57.259 "get_zone_info": false, 00:08:57.259 "zone_management": false, 00:08:57.259 "zone_append": false, 00:08:57.259 "compare": false, 00:08:57.259 "compare_and_write": false, 00:08:57.259 "abort": true, 00:08:57.259 "seek_hole": false, 00:08:57.259 "seek_data": false, 
00:08:57.259 "copy": true, 00:08:57.259 "nvme_iov_md": false 00:08:57.259 }, 00:08:57.259 "memory_domains": [ 00:08:57.259 { 00:08:57.259 "dma_device_id": "system", 00:08:57.259 "dma_device_type": 1 00:08:57.259 }, 00:08:57.259 { 00:08:57.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.259 "dma_device_type": 2 00:08:57.259 } 00:08:57.260 ], 00:08:57.260 "driver_specific": {} 00:08:57.260 } 00:08:57.260 ] 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.260 "name": "Existed_Raid", 00:08:57.260 "uuid": "6a535a5e-8ce5-44ee-96be-9d58053bd44a", 00:08:57.260 "strip_size_kb": 64, 00:08:57.260 "state": "online", 00:08:57.260 "raid_level": "concat", 00:08:57.260 "superblock": true, 00:08:57.260 "num_base_bdevs": 3, 00:08:57.260 "num_base_bdevs_discovered": 3, 00:08:57.260 "num_base_bdevs_operational": 3, 00:08:57.260 "base_bdevs_list": [ 00:08:57.260 { 00:08:57.260 "name": "BaseBdev1", 00:08:57.260 "uuid": "6917388f-b5e1-488c-9bf1-cfa5c317a254", 00:08:57.260 "is_configured": true, 00:08:57.260 "data_offset": 2048, 00:08:57.260 "data_size": 63488 00:08:57.260 }, 00:08:57.260 { 00:08:57.260 "name": "BaseBdev2", 00:08:57.260 "uuid": "c0687355-ae2c-4368-b584-04d19dfe43f0", 00:08:57.260 "is_configured": true, 00:08:57.260 "data_offset": 2048, 00:08:57.260 "data_size": 63488 00:08:57.260 }, 00:08:57.260 { 00:08:57.260 "name": "BaseBdev3", 00:08:57.260 "uuid": "8473dfb1-4630-445a-9f38-f8fcf52ce0a3", 00:08:57.260 "is_configured": true, 00:08:57.260 "data_offset": 2048, 00:08:57.260 "data_size": 63488 00:08:57.260 } 00:08:57.260 ] 00:08:57.260 }' 00:08:57.260 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.260 21:16:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.529 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.530 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.530 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.530 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.530 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.530 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.530 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:57.530 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.530 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.530 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.530 [2024-11-26 21:16:15.677988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.802 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.802 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.802 "name": "Existed_Raid", 00:08:57.802 "aliases": [ 00:08:57.802 "6a535a5e-8ce5-44ee-96be-9d58053bd44a" 00:08:57.802 ], 00:08:57.802 "product_name": "Raid Volume", 00:08:57.802 "block_size": 512, 00:08:57.802 "num_blocks": 190464, 00:08:57.802 "uuid": "6a535a5e-8ce5-44ee-96be-9d58053bd44a", 00:08:57.802 "assigned_rate_limits": { 00:08:57.802 "rw_ios_per_sec": 0, 00:08:57.802 "rw_mbytes_per_sec": 0, 00:08:57.802 
"r_mbytes_per_sec": 0, 00:08:57.803 "w_mbytes_per_sec": 0 00:08:57.803 }, 00:08:57.803 "claimed": false, 00:08:57.803 "zoned": false, 00:08:57.803 "supported_io_types": { 00:08:57.803 "read": true, 00:08:57.803 "write": true, 00:08:57.803 "unmap": true, 00:08:57.803 "flush": true, 00:08:57.803 "reset": true, 00:08:57.803 "nvme_admin": false, 00:08:57.803 "nvme_io": false, 00:08:57.803 "nvme_io_md": false, 00:08:57.803 "write_zeroes": true, 00:08:57.803 "zcopy": false, 00:08:57.803 "get_zone_info": false, 00:08:57.803 "zone_management": false, 00:08:57.803 "zone_append": false, 00:08:57.803 "compare": false, 00:08:57.803 "compare_and_write": false, 00:08:57.803 "abort": false, 00:08:57.803 "seek_hole": false, 00:08:57.803 "seek_data": false, 00:08:57.803 "copy": false, 00:08:57.803 "nvme_iov_md": false 00:08:57.803 }, 00:08:57.803 "memory_domains": [ 00:08:57.803 { 00:08:57.803 "dma_device_id": "system", 00:08:57.803 "dma_device_type": 1 00:08:57.803 }, 00:08:57.803 { 00:08:57.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.803 "dma_device_type": 2 00:08:57.803 }, 00:08:57.803 { 00:08:57.803 "dma_device_id": "system", 00:08:57.803 "dma_device_type": 1 00:08:57.803 }, 00:08:57.803 { 00:08:57.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.803 "dma_device_type": 2 00:08:57.803 }, 00:08:57.803 { 00:08:57.803 "dma_device_id": "system", 00:08:57.803 "dma_device_type": 1 00:08:57.803 }, 00:08:57.803 { 00:08:57.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.803 "dma_device_type": 2 00:08:57.803 } 00:08:57.803 ], 00:08:57.803 "driver_specific": { 00:08:57.803 "raid": { 00:08:57.803 "uuid": "6a535a5e-8ce5-44ee-96be-9d58053bd44a", 00:08:57.803 "strip_size_kb": 64, 00:08:57.803 "state": "online", 00:08:57.803 "raid_level": "concat", 00:08:57.803 "superblock": true, 00:08:57.803 "num_base_bdevs": 3, 00:08:57.803 "num_base_bdevs_discovered": 3, 00:08:57.803 "num_base_bdevs_operational": 3, 00:08:57.803 "base_bdevs_list": [ 00:08:57.803 { 00:08:57.803 
"name": "BaseBdev1", 00:08:57.803 "uuid": "6917388f-b5e1-488c-9bf1-cfa5c317a254", 00:08:57.803 "is_configured": true, 00:08:57.803 "data_offset": 2048, 00:08:57.803 "data_size": 63488 00:08:57.803 }, 00:08:57.803 { 00:08:57.803 "name": "BaseBdev2", 00:08:57.803 "uuid": "c0687355-ae2c-4368-b584-04d19dfe43f0", 00:08:57.803 "is_configured": true, 00:08:57.803 "data_offset": 2048, 00:08:57.803 "data_size": 63488 00:08:57.803 }, 00:08:57.803 { 00:08:57.803 "name": "BaseBdev3", 00:08:57.803 "uuid": "8473dfb1-4630-445a-9f38-f8fcf52ce0a3", 00:08:57.803 "is_configured": true, 00:08:57.803 "data_offset": 2048, 00:08:57.803 "data_size": 63488 00:08:57.803 } 00:08:57.803 ] 00:08:57.803 } 00:08:57.803 } 00:08:57.803 }' 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:57.803 BaseBdev2 00:08:57.803 BaseBdev3' 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.803 21:16:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.803 21:16:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.803 [2024-11-26 21:16:15.929262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.803 [2024-11-26 21:16:15.929291] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.803 [2024-11-26 21:16:15.929344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.063 "name": "Existed_Raid", 00:08:58.063 "uuid": "6a535a5e-8ce5-44ee-96be-9d58053bd44a", 00:08:58.063 "strip_size_kb": 64, 00:08:58.063 "state": "offline", 00:08:58.063 "raid_level": "concat", 00:08:58.063 "superblock": true, 00:08:58.063 "num_base_bdevs": 3, 00:08:58.063 "num_base_bdevs_discovered": 2, 00:08:58.063 "num_base_bdevs_operational": 2, 00:08:58.063 "base_bdevs_list": [ 00:08:58.063 { 00:08:58.063 "name": null, 00:08:58.063 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:58.063 "is_configured": false, 00:08:58.063 "data_offset": 0, 00:08:58.063 "data_size": 63488 00:08:58.063 }, 00:08:58.063 { 00:08:58.063 "name": "BaseBdev2", 00:08:58.063 "uuid": "c0687355-ae2c-4368-b584-04d19dfe43f0", 00:08:58.063 "is_configured": true, 00:08:58.063 "data_offset": 2048, 00:08:58.063 "data_size": 63488 00:08:58.063 }, 00:08:58.063 { 00:08:58.063 "name": "BaseBdev3", 00:08:58.063 "uuid": "8473dfb1-4630-445a-9f38-f8fcf52ce0a3", 00:08:58.063 "is_configured": true, 00:08:58.063 "data_offset": 2048, 00:08:58.063 "data_size": 63488 00:08:58.063 } 00:08:58.063 ] 00:08:58.063 }' 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.063 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.633 [2024-11-26 21:16:16.545931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.633 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.633 [2024-11-26 21:16:16.700145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.633 [2024-11-26 21:16:16.700243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.893 BaseBdev2 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.893 
21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.893 [ 00:08:58.893 { 00:08:58.893 "name": "BaseBdev2", 00:08:58.893 "aliases": [ 00:08:58.893 "897ede7d-ae69-4808-b99d-639b34e161cc" 00:08:58.893 ], 00:08:58.893 "product_name": "Malloc disk", 00:08:58.893 "block_size": 512, 00:08:58.893 "num_blocks": 65536, 00:08:58.893 "uuid": "897ede7d-ae69-4808-b99d-639b34e161cc", 00:08:58.893 "assigned_rate_limits": { 00:08:58.893 "rw_ios_per_sec": 0, 00:08:58.893 "rw_mbytes_per_sec": 0, 00:08:58.893 "r_mbytes_per_sec": 0, 00:08:58.893 "w_mbytes_per_sec": 0 
00:08:58.893 }, 00:08:58.893 "claimed": false, 00:08:58.893 "zoned": false, 00:08:58.893 "supported_io_types": { 00:08:58.893 "read": true, 00:08:58.893 "write": true, 00:08:58.893 "unmap": true, 00:08:58.893 "flush": true, 00:08:58.893 "reset": true, 00:08:58.893 "nvme_admin": false, 00:08:58.893 "nvme_io": false, 00:08:58.893 "nvme_io_md": false, 00:08:58.893 "write_zeroes": true, 00:08:58.893 "zcopy": true, 00:08:58.893 "get_zone_info": false, 00:08:58.893 "zone_management": false, 00:08:58.893 "zone_append": false, 00:08:58.893 "compare": false, 00:08:58.893 "compare_and_write": false, 00:08:58.893 "abort": true, 00:08:58.893 "seek_hole": false, 00:08:58.893 "seek_data": false, 00:08:58.893 "copy": true, 00:08:58.893 "nvme_iov_md": false 00:08:58.893 }, 00:08:58.893 "memory_domains": [ 00:08:58.893 { 00:08:58.893 "dma_device_id": "system", 00:08:58.893 "dma_device_type": 1 00:08:58.893 }, 00:08:58.893 { 00:08:58.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.893 "dma_device_type": 2 00:08:58.893 } 00:08:58.893 ], 00:08:58.893 "driver_specific": {} 00:08:58.893 } 00:08:58.893 ] 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:58.893 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.894 BaseBdev3 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.894 21:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.894 [ 00:08:58.894 { 00:08:58.894 "name": "BaseBdev3", 00:08:58.894 "aliases": [ 00:08:58.894 "7548090e-6056-42a3-b5ab-e7a36191e087" 00:08:58.894 ], 00:08:58.894 "product_name": "Malloc disk", 00:08:58.894 "block_size": 512, 00:08:58.894 "num_blocks": 65536, 00:08:58.894 "uuid": "7548090e-6056-42a3-b5ab-e7a36191e087", 00:08:58.894 "assigned_rate_limits": { 00:08:58.894 "rw_ios_per_sec": 0, 00:08:58.894 "rw_mbytes_per_sec": 0, 
00:08:58.894 "r_mbytes_per_sec": 0, 00:08:58.894 "w_mbytes_per_sec": 0 00:08:58.894 }, 00:08:58.894 "claimed": false, 00:08:58.894 "zoned": false, 00:08:58.894 "supported_io_types": { 00:08:58.894 "read": true, 00:08:58.894 "write": true, 00:08:58.894 "unmap": true, 00:08:58.894 "flush": true, 00:08:58.894 "reset": true, 00:08:58.894 "nvme_admin": false, 00:08:58.894 "nvme_io": false, 00:08:58.894 "nvme_io_md": false, 00:08:58.894 "write_zeroes": true, 00:08:58.894 "zcopy": true, 00:08:58.894 "get_zone_info": false, 00:08:58.894 "zone_management": false, 00:08:58.894 "zone_append": false, 00:08:58.894 "compare": false, 00:08:58.894 "compare_and_write": false, 00:08:58.894 "abort": true, 00:08:58.894 "seek_hole": false, 00:08:58.894 "seek_data": false, 00:08:58.894 "copy": true, 00:08:58.894 "nvme_iov_md": false 00:08:58.894 }, 00:08:58.894 "memory_domains": [ 00:08:58.894 { 00:08:58.894 "dma_device_id": "system", 00:08:58.894 "dma_device_type": 1 00:08:58.894 }, 00:08:58.894 { 00:08:58.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.894 "dma_device_type": 2 00:08:58.894 } 00:08:58.894 ], 00:08:58.894 "driver_specific": {} 00:08:58.894 } 00:08:58.894 ] 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.894 [2024-11-26 21:16:17.008738] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.894 [2024-11-26 21:16:17.008844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.894 [2024-11-26 21:16:17.008897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.894 [2024-11-26 21:16:17.010738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.894 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.154 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.154 "name": "Existed_Raid", 00:08:59.154 "uuid": "1c78042d-fca4-43f3-bd55-cb0d4e7d6ba0", 00:08:59.154 "strip_size_kb": 64, 00:08:59.154 "state": "configuring", 00:08:59.154 "raid_level": "concat", 00:08:59.154 "superblock": true, 00:08:59.154 "num_base_bdevs": 3, 00:08:59.154 "num_base_bdevs_discovered": 2, 00:08:59.154 "num_base_bdevs_operational": 3, 00:08:59.154 "base_bdevs_list": [ 00:08:59.154 { 00:08:59.154 "name": "BaseBdev1", 00:08:59.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.154 "is_configured": false, 00:08:59.154 "data_offset": 0, 00:08:59.154 "data_size": 0 00:08:59.154 }, 00:08:59.154 { 00:08:59.154 "name": "BaseBdev2", 00:08:59.154 "uuid": "897ede7d-ae69-4808-b99d-639b34e161cc", 00:08:59.154 "is_configured": true, 00:08:59.154 "data_offset": 2048, 00:08:59.154 "data_size": 63488 00:08:59.154 }, 00:08:59.154 { 00:08:59.154 "name": "BaseBdev3", 00:08:59.154 "uuid": "7548090e-6056-42a3-b5ab-e7a36191e087", 00:08:59.154 "is_configured": true, 00:08:59.154 "data_offset": 2048, 00:08:59.154 "data_size": 63488 00:08:59.154 } 00:08:59.154 ] 00:08:59.154 }' 00:08:59.154 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.154 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.414 [2024-11-26 21:16:17.443984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.414 21:16:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.414 "name": "Existed_Raid", 00:08:59.414 "uuid": "1c78042d-fca4-43f3-bd55-cb0d4e7d6ba0", 00:08:59.414 "strip_size_kb": 64, 00:08:59.414 "state": "configuring", 00:08:59.414 "raid_level": "concat", 00:08:59.414 "superblock": true, 00:08:59.414 "num_base_bdevs": 3, 00:08:59.414 "num_base_bdevs_discovered": 1, 00:08:59.414 "num_base_bdevs_operational": 3, 00:08:59.414 "base_bdevs_list": [ 00:08:59.414 { 00:08:59.414 "name": "BaseBdev1", 00:08:59.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.414 "is_configured": false, 00:08:59.414 "data_offset": 0, 00:08:59.414 "data_size": 0 00:08:59.414 }, 00:08:59.414 { 00:08:59.414 "name": null, 00:08:59.414 "uuid": "897ede7d-ae69-4808-b99d-639b34e161cc", 00:08:59.414 "is_configured": false, 00:08:59.414 "data_offset": 0, 00:08:59.414 "data_size": 63488 00:08:59.414 }, 00:08:59.414 { 00:08:59.414 "name": "BaseBdev3", 00:08:59.414 "uuid": "7548090e-6056-42a3-b5ab-e7a36191e087", 00:08:59.414 "is_configured": true, 00:08:59.414 "data_offset": 2048, 00:08:59.414 "data_size": 63488 00:08:59.414 } 00:08:59.414 ] 00:08:59.414 }' 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.414 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.984 [2024-11-26 21:16:17.931271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.984 BaseBdev1 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.984 [ 00:08:59.984 { 00:08:59.984 "name": "BaseBdev1", 00:08:59.984 "aliases": [ 00:08:59.984 "ccb05399-c416-4201-acee-d670d85bf982" 00:08:59.984 ], 00:08:59.984 "product_name": "Malloc disk", 00:08:59.984 "block_size": 512, 00:08:59.984 "num_blocks": 65536, 00:08:59.984 "uuid": "ccb05399-c416-4201-acee-d670d85bf982", 00:08:59.984 "assigned_rate_limits": { 00:08:59.984 "rw_ios_per_sec": 0, 00:08:59.984 "rw_mbytes_per_sec": 0, 00:08:59.984 "r_mbytes_per_sec": 0, 00:08:59.984 "w_mbytes_per_sec": 0 00:08:59.984 }, 00:08:59.984 "claimed": true, 00:08:59.984 "claim_type": "exclusive_write", 00:08:59.984 "zoned": false, 00:08:59.984 "supported_io_types": { 00:08:59.984 "read": true, 00:08:59.984 "write": true, 00:08:59.984 "unmap": true, 00:08:59.984 "flush": true, 00:08:59.984 "reset": true, 00:08:59.984 "nvme_admin": false, 00:08:59.984 "nvme_io": false, 00:08:59.984 "nvme_io_md": false, 00:08:59.984 "write_zeroes": true, 00:08:59.984 "zcopy": true, 00:08:59.984 "get_zone_info": false, 00:08:59.984 "zone_management": false, 00:08:59.984 "zone_append": false, 00:08:59.984 "compare": false, 00:08:59.984 "compare_and_write": false, 00:08:59.984 "abort": true, 00:08:59.984 "seek_hole": false, 00:08:59.984 "seek_data": false, 00:08:59.984 "copy": true, 00:08:59.984 "nvme_iov_md": false 00:08:59.984 }, 00:08:59.984 "memory_domains": [ 00:08:59.984 { 00:08:59.984 "dma_device_id": "system", 00:08:59.984 "dma_device_type": 1 00:08:59.984 }, 00:08:59.984 { 00:08:59.984 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:59.984 "dma_device_type": 2 00:08:59.984 } 00:08:59.984 ], 00:08:59.984 "driver_specific": {} 00:08:59.984 } 00:08:59.984 ] 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.984 21:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.984 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.984 "name": "Existed_Raid", 00:08:59.984 "uuid": "1c78042d-fca4-43f3-bd55-cb0d4e7d6ba0", 00:08:59.984 "strip_size_kb": 64, 00:08:59.984 "state": "configuring", 00:08:59.984 "raid_level": "concat", 00:08:59.984 "superblock": true, 00:08:59.984 "num_base_bdevs": 3, 00:08:59.984 "num_base_bdevs_discovered": 2, 00:08:59.984 "num_base_bdevs_operational": 3, 00:08:59.984 "base_bdevs_list": [ 00:08:59.984 { 00:08:59.984 "name": "BaseBdev1", 00:08:59.984 "uuid": "ccb05399-c416-4201-acee-d670d85bf982", 00:08:59.984 "is_configured": true, 00:08:59.984 "data_offset": 2048, 00:08:59.984 "data_size": 63488 00:08:59.984 }, 00:08:59.984 { 00:08:59.984 "name": null, 00:08:59.984 "uuid": "897ede7d-ae69-4808-b99d-639b34e161cc", 00:08:59.984 "is_configured": false, 00:08:59.984 "data_offset": 0, 00:08:59.984 "data_size": 63488 00:08:59.984 }, 00:08:59.984 { 00:08:59.984 "name": "BaseBdev3", 00:08:59.984 "uuid": "7548090e-6056-42a3-b5ab-e7a36191e087", 00:08:59.984 "is_configured": true, 00:08:59.984 "data_offset": 2048, 00:08:59.984 "data_size": 63488 00:08:59.984 } 00:08:59.984 ] 00:08:59.984 }' 00:08:59.984 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.984 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.554 [2024-11-26 21:16:18.486379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.554 "name": "Existed_Raid", 00:09:00.554 "uuid": "1c78042d-fca4-43f3-bd55-cb0d4e7d6ba0", 00:09:00.554 "strip_size_kb": 64, 00:09:00.554 "state": "configuring", 00:09:00.554 "raid_level": "concat", 00:09:00.554 "superblock": true, 00:09:00.554 "num_base_bdevs": 3, 00:09:00.554 "num_base_bdevs_discovered": 1, 00:09:00.554 "num_base_bdevs_operational": 3, 00:09:00.554 "base_bdevs_list": [ 00:09:00.554 { 00:09:00.554 "name": "BaseBdev1", 00:09:00.554 "uuid": "ccb05399-c416-4201-acee-d670d85bf982", 00:09:00.554 "is_configured": true, 00:09:00.554 "data_offset": 2048, 00:09:00.554 "data_size": 63488 00:09:00.554 }, 00:09:00.554 { 00:09:00.554 "name": null, 00:09:00.554 "uuid": "897ede7d-ae69-4808-b99d-639b34e161cc", 00:09:00.554 "is_configured": false, 00:09:00.554 "data_offset": 0, 00:09:00.554 "data_size": 63488 00:09:00.554 }, 00:09:00.554 { 00:09:00.554 "name": null, 00:09:00.554 "uuid": "7548090e-6056-42a3-b5ab-e7a36191e087", 00:09:00.554 "is_configured": false, 00:09:00.554 "data_offset": 0, 00:09:00.554 "data_size": 63488 00:09:00.554 } 00:09:00.554 ] 00:09:00.554 }' 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.554 21:16:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.815 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.815 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.815 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.815 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.815 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.815 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:00.815 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:00.815 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.815 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.074 [2024-11-26 21:16:18.969574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.074 21:16:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.074 21:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.074 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.074 "name": "Existed_Raid", 00:09:01.074 "uuid": "1c78042d-fca4-43f3-bd55-cb0d4e7d6ba0", 00:09:01.074 "strip_size_kb": 64, 00:09:01.074 "state": "configuring", 00:09:01.074 "raid_level": "concat", 00:09:01.074 "superblock": true, 00:09:01.074 "num_base_bdevs": 3, 00:09:01.074 "num_base_bdevs_discovered": 2, 00:09:01.074 "num_base_bdevs_operational": 3, 00:09:01.074 "base_bdevs_list": [ 00:09:01.074 { 00:09:01.074 "name": "BaseBdev1", 00:09:01.074 "uuid": "ccb05399-c416-4201-acee-d670d85bf982", 00:09:01.074 "is_configured": true, 00:09:01.074 "data_offset": 2048, 00:09:01.074 "data_size": 63488 00:09:01.074 }, 00:09:01.074 { 00:09:01.074 "name": null, 00:09:01.074 "uuid": "897ede7d-ae69-4808-b99d-639b34e161cc", 00:09:01.074 "is_configured": 
false, 00:09:01.074 "data_offset": 0, 00:09:01.074 "data_size": 63488 00:09:01.074 }, 00:09:01.074 { 00:09:01.074 "name": "BaseBdev3", 00:09:01.074 "uuid": "7548090e-6056-42a3-b5ab-e7a36191e087", 00:09:01.074 "is_configured": true, 00:09:01.074 "data_offset": 2048, 00:09:01.074 "data_size": 63488 00:09:01.074 } 00:09:01.074 ] 00:09:01.074 }' 00:09:01.074 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.074 21:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.333 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.333 21:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.333 21:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.333 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:01.333 21:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.333 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:01.333 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.333 21:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.333 21:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.333 [2024-11-26 21:16:19.452786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.592 21:16:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.592 "name": "Existed_Raid", 00:09:01.592 "uuid": "1c78042d-fca4-43f3-bd55-cb0d4e7d6ba0", 00:09:01.592 "strip_size_kb": 64, 00:09:01.592 "state": "configuring", 00:09:01.592 "raid_level": "concat", 00:09:01.592 "superblock": true, 00:09:01.592 "num_base_bdevs": 3, 00:09:01.592 
"num_base_bdevs_discovered": 1, 00:09:01.592 "num_base_bdevs_operational": 3, 00:09:01.592 "base_bdevs_list": [ 00:09:01.592 { 00:09:01.592 "name": null, 00:09:01.592 "uuid": "ccb05399-c416-4201-acee-d670d85bf982", 00:09:01.592 "is_configured": false, 00:09:01.592 "data_offset": 0, 00:09:01.592 "data_size": 63488 00:09:01.592 }, 00:09:01.592 { 00:09:01.592 "name": null, 00:09:01.592 "uuid": "897ede7d-ae69-4808-b99d-639b34e161cc", 00:09:01.592 "is_configured": false, 00:09:01.592 "data_offset": 0, 00:09:01.592 "data_size": 63488 00:09:01.592 }, 00:09:01.592 { 00:09:01.592 "name": "BaseBdev3", 00:09:01.592 "uuid": "7548090e-6056-42a3-b5ab-e7a36191e087", 00:09:01.592 "is_configured": true, 00:09:01.592 "data_offset": 2048, 00:09:01.592 "data_size": 63488 00:09:01.592 } 00:09:01.592 ] 00:09:01.592 }' 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.592 21:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.161 21:16:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.161 [2024-11-26 21:16:20.044106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.161 
21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.161 "name": "Existed_Raid", 00:09:02.161 "uuid": "1c78042d-fca4-43f3-bd55-cb0d4e7d6ba0", 00:09:02.161 "strip_size_kb": 64, 00:09:02.161 "state": "configuring", 00:09:02.161 "raid_level": "concat", 00:09:02.161 "superblock": true, 00:09:02.161 "num_base_bdevs": 3, 00:09:02.161 "num_base_bdevs_discovered": 2, 00:09:02.161 "num_base_bdevs_operational": 3, 00:09:02.161 "base_bdevs_list": [ 00:09:02.161 { 00:09:02.161 "name": null, 00:09:02.161 "uuid": "ccb05399-c416-4201-acee-d670d85bf982", 00:09:02.161 "is_configured": false, 00:09:02.161 "data_offset": 0, 00:09:02.161 "data_size": 63488 00:09:02.161 }, 00:09:02.161 { 00:09:02.161 "name": "BaseBdev2", 00:09:02.161 "uuid": "897ede7d-ae69-4808-b99d-639b34e161cc", 00:09:02.161 "is_configured": true, 00:09:02.161 "data_offset": 2048, 00:09:02.161 "data_size": 63488 00:09:02.161 }, 00:09:02.161 { 00:09:02.161 "name": "BaseBdev3", 00:09:02.161 "uuid": "7548090e-6056-42a3-b5ab-e7a36191e087", 00:09:02.161 "is_configured": true, 00:09:02.161 "data_offset": 2048, 00:09:02.161 "data_size": 63488 00:09:02.161 } 00:09:02.161 ] 00:09:02.161 }' 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.161 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.421 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.421 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.421 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.422 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:09:02.422 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.422 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:02.422 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.422 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:02.422 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.422 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.422 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.422 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ccb05399-c416-4201-acee-d670d85bf982 00:09:02.422 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.422 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.682 [2024-11-26 21:16:20.586785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:02.682 [2024-11-26 21:16:20.587053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:02.682 [2024-11-26 21:16:20.587072] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:02.682 [2024-11-26 21:16:20.587333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:02.682 NewBaseBdev 00:09:02.682 [2024-11-26 21:16:20.587488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:02.682 [2024-11-26 21:16:20.587504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:09:02.682 [2024-11-26 21:16:20.587663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.683 [ 00:09:02.683 { 00:09:02.683 "name": "NewBaseBdev", 00:09:02.683 "aliases": [ 00:09:02.683 "ccb05399-c416-4201-acee-d670d85bf982" 00:09:02.683 ], 00:09:02.683 "product_name": "Malloc disk", 00:09:02.683 "block_size": 512, 
00:09:02.683 "num_blocks": 65536, 00:09:02.683 "uuid": "ccb05399-c416-4201-acee-d670d85bf982", 00:09:02.683 "assigned_rate_limits": { 00:09:02.683 "rw_ios_per_sec": 0, 00:09:02.683 "rw_mbytes_per_sec": 0, 00:09:02.683 "r_mbytes_per_sec": 0, 00:09:02.683 "w_mbytes_per_sec": 0 00:09:02.683 }, 00:09:02.683 "claimed": true, 00:09:02.683 "claim_type": "exclusive_write", 00:09:02.683 "zoned": false, 00:09:02.683 "supported_io_types": { 00:09:02.683 "read": true, 00:09:02.683 "write": true, 00:09:02.683 "unmap": true, 00:09:02.683 "flush": true, 00:09:02.683 "reset": true, 00:09:02.683 "nvme_admin": false, 00:09:02.683 "nvme_io": false, 00:09:02.683 "nvme_io_md": false, 00:09:02.683 "write_zeroes": true, 00:09:02.683 "zcopy": true, 00:09:02.683 "get_zone_info": false, 00:09:02.683 "zone_management": false, 00:09:02.683 "zone_append": false, 00:09:02.683 "compare": false, 00:09:02.683 "compare_and_write": false, 00:09:02.683 "abort": true, 00:09:02.683 "seek_hole": false, 00:09:02.683 "seek_data": false, 00:09:02.683 "copy": true, 00:09:02.683 "nvme_iov_md": false 00:09:02.683 }, 00:09:02.683 "memory_domains": [ 00:09:02.683 { 00:09:02.683 "dma_device_id": "system", 00:09:02.683 "dma_device_type": 1 00:09:02.683 }, 00:09:02.683 { 00:09:02.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.683 "dma_device_type": 2 00:09:02.683 } 00:09:02.683 ], 00:09:02.683 "driver_specific": {} 00:09:02.683 } 00:09:02.683 ] 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.683 "name": "Existed_Raid", 00:09:02.683 "uuid": "1c78042d-fca4-43f3-bd55-cb0d4e7d6ba0", 00:09:02.683 "strip_size_kb": 64, 00:09:02.683 "state": "online", 00:09:02.683 "raid_level": "concat", 00:09:02.683 "superblock": true, 00:09:02.683 "num_base_bdevs": 3, 00:09:02.683 "num_base_bdevs_discovered": 3, 00:09:02.683 "num_base_bdevs_operational": 3, 00:09:02.683 "base_bdevs_list": [ 00:09:02.683 { 00:09:02.683 "name": "NewBaseBdev", 00:09:02.683 "uuid": 
"ccb05399-c416-4201-acee-d670d85bf982", 00:09:02.683 "is_configured": true, 00:09:02.683 "data_offset": 2048, 00:09:02.683 "data_size": 63488 00:09:02.683 }, 00:09:02.683 { 00:09:02.683 "name": "BaseBdev2", 00:09:02.683 "uuid": "897ede7d-ae69-4808-b99d-639b34e161cc", 00:09:02.683 "is_configured": true, 00:09:02.683 "data_offset": 2048, 00:09:02.683 "data_size": 63488 00:09:02.683 }, 00:09:02.683 { 00:09:02.683 "name": "BaseBdev3", 00:09:02.683 "uuid": "7548090e-6056-42a3-b5ab-e7a36191e087", 00:09:02.683 "is_configured": true, 00:09:02.683 "data_offset": 2048, 00:09:02.683 "data_size": 63488 00:09:02.683 } 00:09:02.683 ] 00:09:02.683 }' 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.683 21:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:09:02.944 [2024-11-26 21:16:21.034385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.944 "name": "Existed_Raid", 00:09:02.944 "aliases": [ 00:09:02.944 "1c78042d-fca4-43f3-bd55-cb0d4e7d6ba0" 00:09:02.944 ], 00:09:02.944 "product_name": "Raid Volume", 00:09:02.944 "block_size": 512, 00:09:02.944 "num_blocks": 190464, 00:09:02.944 "uuid": "1c78042d-fca4-43f3-bd55-cb0d4e7d6ba0", 00:09:02.944 "assigned_rate_limits": { 00:09:02.944 "rw_ios_per_sec": 0, 00:09:02.944 "rw_mbytes_per_sec": 0, 00:09:02.944 "r_mbytes_per_sec": 0, 00:09:02.944 "w_mbytes_per_sec": 0 00:09:02.944 }, 00:09:02.944 "claimed": false, 00:09:02.944 "zoned": false, 00:09:02.944 "supported_io_types": { 00:09:02.944 "read": true, 00:09:02.944 "write": true, 00:09:02.944 "unmap": true, 00:09:02.944 "flush": true, 00:09:02.944 "reset": true, 00:09:02.944 "nvme_admin": false, 00:09:02.944 "nvme_io": false, 00:09:02.944 "nvme_io_md": false, 00:09:02.944 "write_zeroes": true, 00:09:02.944 "zcopy": false, 00:09:02.944 "get_zone_info": false, 00:09:02.944 "zone_management": false, 00:09:02.944 "zone_append": false, 00:09:02.944 "compare": false, 00:09:02.944 "compare_and_write": false, 00:09:02.944 "abort": false, 00:09:02.944 "seek_hole": false, 00:09:02.944 "seek_data": false, 00:09:02.944 "copy": false, 00:09:02.944 "nvme_iov_md": false 00:09:02.944 }, 00:09:02.944 "memory_domains": [ 00:09:02.944 { 00:09:02.944 "dma_device_id": "system", 00:09:02.944 "dma_device_type": 1 00:09:02.944 }, 00:09:02.944 { 00:09:02.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.944 "dma_device_type": 2 00:09:02.944 }, 00:09:02.944 { 00:09:02.944 "dma_device_id": "system", 00:09:02.944 "dma_device_type": 1 00:09:02.944 }, 00:09:02.944 { 00:09:02.944 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.944 "dma_device_type": 2 00:09:02.944 }, 00:09:02.944 { 00:09:02.944 "dma_device_id": "system", 00:09:02.944 "dma_device_type": 1 00:09:02.944 }, 00:09:02.944 { 00:09:02.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.944 "dma_device_type": 2 00:09:02.944 } 00:09:02.944 ], 00:09:02.944 "driver_specific": { 00:09:02.944 "raid": { 00:09:02.944 "uuid": "1c78042d-fca4-43f3-bd55-cb0d4e7d6ba0", 00:09:02.944 "strip_size_kb": 64, 00:09:02.944 "state": "online", 00:09:02.944 "raid_level": "concat", 00:09:02.944 "superblock": true, 00:09:02.944 "num_base_bdevs": 3, 00:09:02.944 "num_base_bdevs_discovered": 3, 00:09:02.944 "num_base_bdevs_operational": 3, 00:09:02.944 "base_bdevs_list": [ 00:09:02.944 { 00:09:02.944 "name": "NewBaseBdev", 00:09:02.944 "uuid": "ccb05399-c416-4201-acee-d670d85bf982", 00:09:02.944 "is_configured": true, 00:09:02.944 "data_offset": 2048, 00:09:02.944 "data_size": 63488 00:09:02.944 }, 00:09:02.944 { 00:09:02.944 "name": "BaseBdev2", 00:09:02.944 "uuid": "897ede7d-ae69-4808-b99d-639b34e161cc", 00:09:02.944 "is_configured": true, 00:09:02.944 "data_offset": 2048, 00:09:02.944 "data_size": 63488 00:09:02.944 }, 00:09:02.944 { 00:09:02.944 "name": "BaseBdev3", 00:09:02.944 "uuid": "7548090e-6056-42a3-b5ab-e7a36191e087", 00:09:02.944 "is_configured": true, 00:09:02.944 "data_offset": 2048, 00:09:02.944 "data_size": 63488 00:09:02.944 } 00:09:02.944 ] 00:09:02.944 } 00:09:02.944 } 00:09:02.944 }' 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.944 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:02.944 BaseBdev2 00:09:02.944 BaseBdev3' 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.204 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.205 [2024-11-26 21:16:21.261705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.205 [2024-11-26 21:16:21.261736] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.205 [2024-11-26 21:16:21.261822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.205 [2024-11-26 21:16:21.261874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.205 [2024-11-26 21:16:21.261886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66078 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66078 ']' 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66078 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66078 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.205 killing process with pid 66078 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66078' 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66078 00:09:03.205 [2024-11-26 21:16:21.311479] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.205 21:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66078 00:09:03.464 [2024-11-26 21:16:21.600800] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.859 21:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:04.859 00:09:04.859 real 0m10.323s 00:09:04.859 user 0m16.371s 00:09:04.859 sys 0m1.851s 00:09:04.859 21:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable
00:09:04.859 21:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.859 ************************************
00:09:04.859 END TEST raid_state_function_test_sb
00:09:04.859 ************************************
00:09:04.859 21:16:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3
00:09:04.859 21:16:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:04.859 21:16:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:04.859 21:16:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:04.859 ************************************
00:09:04.859 START TEST raid_superblock_test
00:09:04.859 ************************************
00:09:04.859 21:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3
00:09:04.859 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:09:04.859 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:09:04.859 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:09:04.859 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66694
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66694
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66694 ']'
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:04.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:04.860 21:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.860 [2024-11-26 21:16:22.819462] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:09:04.860 [2024-11-26 21:16:22.819592] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66694 ]
00:09:04.860 [2024-11-26 21:16:22.974715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:05.120 [2024-11-26 21:16:23.085931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:05.380 [2024-11-26 21:16:23.283365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:05.380 [2024-11-26 21:16:23.283399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.640 malloc1
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.640 [2024-11-26 21:16:23.694749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:05.640 [2024-11-26 21:16:23.694852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:05.640 [2024-11-26 21:16:23.694891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:05.640 [2024-11-26 21:16:23.694920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:05.640 [2024-11-26 21:16:23.697006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:05.640 [2024-11-26 21:16:23.697079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:05.640 pt1
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.640 malloc2
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.640 [2024-11-26 21:16:23.754738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:05.640 [2024-11-26 21:16:23.754796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:05.640 [2024-11-26 21:16:23.754822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:05.640 [2024-11-26 21:16:23.754830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:05.640 [2024-11-26 21:16:23.757065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:05.640 [2024-11-26 21:16:23.757101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:05.640 pt2
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:05.640 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.900 malloc3
00:09:05.900 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:05.900 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:05.900 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:05.900 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.900 [2024-11-26 21:16:23.825822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:05.900 [2024-11-26 21:16:23.825918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:05.901 [2024-11-26 21:16:23.825965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:05.901 [2024-11-26 21:16:23.826000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:05.901 [2024-11-26 21:16:23.827995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:05.901 [2024-11-26 21:16:23.828063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:05.901 pt3
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.901 [2024-11-26 21:16:23.837848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:05.901 [2024-11-26 21:16:23.839640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:05.901 [2024-11-26 21:16:23.839706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:05.901 [2024-11-26 21:16:23.839881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:09:05.901 [2024-11-26 21:16:23.839895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:05.901 [2024-11-26 21:16:23.840156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:05.901 [2024-11-26 21:16:23.840330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:09:05.901 [2024-11-26 21:16:23.840345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:09:05.901 [2024-11-26 21:16:23.840495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:05.901 "name": "raid_bdev1",
00:09:05.901 "uuid": "a80c56e8-c020-40b7-a39a-d85e39b39b2a",
00:09:05.901 "strip_size_kb": 64,
00:09:05.901 "state": "online",
00:09:05.901 "raid_level": "concat",
00:09:05.901 "superblock": true,
00:09:05.901 "num_base_bdevs": 3,
00:09:05.901 "num_base_bdevs_discovered": 3,
00:09:05.901 "num_base_bdevs_operational": 3,
00:09:05.901 "base_bdevs_list": [
00:09:05.901 {
00:09:05.901 "name": "pt1",
00:09:05.901 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:05.901 "is_configured": true,
00:09:05.901 "data_offset": 2048,
00:09:05.901 "data_size": 63488
00:09:05.901 },
00:09:05.901 {
00:09:05.901 "name": "pt2",
00:09:05.901 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:05.901 "is_configured": true,
00:09:05.901 "data_offset": 2048,
00:09:05.901 "data_size": 63488
00:09:05.901 },
00:09:05.901 {
00:09:05.901 "name": "pt3",
00:09:05.901 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:05.901 "is_configured": true,
00:09:05.901 "data_offset": 2048,
00:09:05.901 "data_size": 63488
00:09:05.901 }
00:09:05.901 ]
00:09:05.901 }'
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:05.901 21:16:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.161 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:06.161 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:06.161 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:06.161 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:06.161 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:06.161 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:06.161 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:06.161 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.161 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.161 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:06.420 [2024-11-26 21:16:24.317328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:06.420 "name": "raid_bdev1",
00:09:06.420 "aliases": [
00:09:06.420 "a80c56e8-c020-40b7-a39a-d85e39b39b2a"
00:09:06.420 ],
00:09:06.420 "product_name": "Raid Volume",
00:09:06.420 "block_size": 512,
00:09:06.420 "num_blocks": 190464,
00:09:06.420 "uuid": "a80c56e8-c020-40b7-a39a-d85e39b39b2a",
00:09:06.420 "assigned_rate_limits": {
00:09:06.420 "rw_ios_per_sec": 0,
00:09:06.420 "rw_mbytes_per_sec": 0,
00:09:06.420 "r_mbytes_per_sec": 0,
00:09:06.420 "w_mbytes_per_sec": 0
00:09:06.420 },
00:09:06.420 "claimed": false,
00:09:06.420 "zoned": false,
00:09:06.420 "supported_io_types": {
00:09:06.420 "read": true,
00:09:06.420 "write": true,
00:09:06.420 "unmap": true,
00:09:06.420 "flush": true,
00:09:06.420 "reset": true,
00:09:06.420 "nvme_admin": false,
00:09:06.420 "nvme_io": false,
00:09:06.420 "nvme_io_md": false,
00:09:06.420 "write_zeroes": true,
00:09:06.420 "zcopy": false,
00:09:06.420 "get_zone_info": false,
00:09:06.420 "zone_management": false,
00:09:06.420 "zone_append": false,
00:09:06.420 "compare": false,
00:09:06.420 "compare_and_write": false,
00:09:06.420 "abort": false,
00:09:06.420 "seek_hole": false,
00:09:06.420 "seek_data": false,
00:09:06.420 "copy": false,
00:09:06.420 "nvme_iov_md": false
00:09:06.420 },
00:09:06.420 "memory_domains": [
00:09:06.420 {
00:09:06.420 "dma_device_id": "system",
00:09:06.420 "dma_device_type": 1
00:09:06.420 },
00:09:06.420 {
00:09:06.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:06.420 "dma_device_type": 2
00:09:06.420 },
00:09:06.420 {
00:09:06.420 "dma_device_id": "system",
00:09:06.420 "dma_device_type": 1
00:09:06.420 },
00:09:06.420 {
00:09:06.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:06.420 "dma_device_type": 2
00:09:06.420 },
00:09:06.420 {
00:09:06.420 "dma_device_id": "system",
00:09:06.420 "dma_device_type": 1
00:09:06.420 },
00:09:06.420 {
00:09:06.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:06.420 "dma_device_type": 2
00:09:06.420 }
00:09:06.420 ],
00:09:06.420 "driver_specific": {
00:09:06.420 "raid": {
00:09:06.420 "uuid": "a80c56e8-c020-40b7-a39a-d85e39b39b2a",
00:09:06.420 "strip_size_kb": 64,
00:09:06.420 "state": "online",
00:09:06.420 "raid_level": "concat",
00:09:06.420 "superblock": true,
00:09:06.420 "num_base_bdevs": 3,
00:09:06.420 "num_base_bdevs_discovered": 3,
00:09:06.420 "num_base_bdevs_operational": 3,
00:09:06.420 "base_bdevs_list": [
00:09:06.420 {
00:09:06.420 "name": "pt1",
00:09:06.420 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:06.420 "is_configured": true,
00:09:06.420 "data_offset": 2048,
00:09:06.420 "data_size": 63488
00:09:06.420 },
00:09:06.420 {
00:09:06.420 "name": "pt2",
00:09:06.420 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:06.420 "is_configured": true,
00:09:06.420 "data_offset": 2048,
00:09:06.420 "data_size": 63488
00:09:06.420 },
00:09:06.420 {
00:09:06.420 "name": "pt3",
00:09:06.420 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:06.420 "is_configured": true,
00:09:06.420 "data_offset": 2048,
00:09:06.420 "data_size": 63488
00:09:06.420 }
00:09:06.420 ]
00:09:06.420 }
00:09:06.420 }
00:09:06.420 }'
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:06.420 pt2
00:09:06.420 pt3'
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:06.420 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.421 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.421 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.421 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:06.421 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:06.421 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:06.421 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:06.421 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.421 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.680 [2024-11-26 21:16:24.576777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:06.680 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.680 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a80c56e8-c020-40b7-a39a-d85e39b39b2a
00:09:06.680 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a80c56e8-c020-40b7-a39a-d85e39b39b2a ']'
00:09:06.680 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:06.680 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.680 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.680 [2024-11-26 21:16:24.624431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:06.680 [2024-11-26 21:16:24.624457] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:06.680 [2024-11-26 21:16:24.624527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:06.680 [2024-11-26 21:16:24.624587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:06.680 [2024-11-26 21:16:24.624595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.681 [2024-11-26 21:16:24.772218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:06.681 [2024-11-26 21:16:24.774092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:06.681 [2024-11-26 21:16:24.774141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:09:06.681 [2024-11-26 21:16:24.774189] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:06.681 [2024-11-26 21:16:24.774245] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:06.681 [2024-11-26 21:16:24.774265] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:06.681 [2024-11-26 21:16:24.774281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:06.681 [2024-11-26 21:16:24.774289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:06.681 request:
00:09:06.681 {
00:09:06.681 "name": "raid_bdev1",
00:09:06.681 "raid_level": "concat",
00:09:06.681 "base_bdevs": [
00:09:06.681 "malloc1",
00:09:06.681 "malloc2",
00:09:06.681 "malloc3"
00:09:06.681 ],
00:09:06.681 "strip_size_kb": 64,
00:09:06.681 "superblock": false,
00:09:06.681 "method": "bdev_raid_create",
00:09:06.681 "req_id": 1
00:09:06.681 }
00:09:06.681 Got JSON-RPC error response
00:09:06.681 response:
00:09:06.681 {
00:09:06.681 "code": -17,
00:09:06.681 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:06.681 }
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.681 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.940 [2024-11-26 21:16:24.840073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:06.940 [2024-11-26 21:16:24.840164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:06.940 [2024-11-26 21:16:24.840201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:06.940 [2024-11-26 21:16:24.840232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:06.940 [2024-11-26 21:16:24.842466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:06.940 [2024-11-26 21:16:24.842551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:06.940 [2024-11-26 21:16:24.842672] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:06.940 [2024-11-26 21:16:24.842763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:06.940 pt1
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:06.940 "name": "raid_bdev1",
00:09:06.940 "uuid": "a80c56e8-c020-40b7-a39a-d85e39b39b2a",
00:09:06.940 "strip_size_kb": 64,
00:09:06.940 "state": "configuring",
00:09:06.940 "raid_level": "concat",
00:09:06.940 "superblock": true,
00:09:06.940 "num_base_bdevs": 3,
00:09:06.940 "num_base_bdevs_discovered": 1,
00:09:06.940 "num_base_bdevs_operational": 3,
00:09:06.940 "base_bdevs_list": [
00:09:06.940 {
00:09:06.940 "name": "pt1",
00:09:06.940 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:06.940 "is_configured": true,
00:09:06.940 "data_offset": 2048,
00:09:06.940 "data_size": 63488
00:09:06.940 },
00:09:06.940 {
00:09:06.940 "name": null,
00:09:06.940 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:06.940 "is_configured": false,
00:09:06.940 "data_offset": 2048,
00:09:06.940 "data_size": 63488
00:09:06.940 },
00:09:06.940 {
00:09:06.940 "name": null,
00:09:06.940 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:06.940 "is_configured": false,
00:09:06.940 "data_offset": 2048,
00:09:06.940 "data_size": 63488
00:09:06.940 }
00:09:06.940 ]
00:09:06.940 }'
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:06.940 21:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.200 [2024-11-26 21:16:25.331275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:07.200 [2024-11-26 21:16:25.331353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:07.200 [2024-11-26 21:16:25.331379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:09:07.200 [2024-11-26 21:16:25.331388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:07.200 [2024-11-26 21:16:25.331825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:07.200 [2024-11-26 21:16:25.331848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:07.200 [2024-11-26 21:16:25.331929] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:07.200 [2024-11-26 21:16:25.331959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:07.200 pt2
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.200 [2024-11-26 21:16:25.343256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local
num_base_bdevs_operational=3 00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.200 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.460 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.460 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.460 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.460 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.460 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.460 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.460 "name": "raid_bdev1", 00:09:07.460 "uuid": "a80c56e8-c020-40b7-a39a-d85e39b39b2a", 00:09:07.460 "strip_size_kb": 64, 00:09:07.460 "state": "configuring", 00:09:07.460 "raid_level": "concat", 00:09:07.460 "superblock": true, 00:09:07.460 "num_base_bdevs": 3, 00:09:07.460 "num_base_bdevs_discovered": 1, 00:09:07.460 "num_base_bdevs_operational": 3, 00:09:07.460 "base_bdevs_list": [ 00:09:07.460 { 00:09:07.460 "name": "pt1", 00:09:07.460 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.460 "is_configured": true, 00:09:07.460 "data_offset": 2048, 00:09:07.460 "data_size": 63488 00:09:07.460 }, 00:09:07.460 { 00:09:07.460 "name": null, 00:09:07.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.460 "is_configured": false, 00:09:07.461 "data_offset": 0, 00:09:07.461 "data_size": 63488 00:09:07.461 }, 00:09:07.461 { 00:09:07.461 "name": null, 00:09:07.461 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:07.461 "is_configured": false, 00:09:07.461 "data_offset": 2048, 00:09:07.461 "data_size": 63488 00:09:07.461 } 00:09:07.461 ] 00:09:07.461 }' 00:09:07.461 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.461 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.721 [2024-11-26 21:16:25.734608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:07.721 [2024-11-26 21:16:25.734731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.721 [2024-11-26 21:16:25.734765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:07.721 [2024-11-26 21:16:25.734800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.721 [2024-11-26 21:16:25.735280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.721 [2024-11-26 21:16:25.735341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:07.721 [2024-11-26 21:16:25.735445] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:07.721 [2024-11-26 21:16:25.735497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.721 pt2 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.721 [2024-11-26 21:16:25.746556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:07.721 [2024-11-26 21:16:25.746643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.721 [2024-11-26 21:16:25.746672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:07.721 [2024-11-26 21:16:25.746703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.721 [2024-11-26 21:16:25.747087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.721 [2024-11-26 21:16:25.747145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:07.721 [2024-11-26 21:16:25.747256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:07.721 [2024-11-26 21:16:25.747320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:07.721 [2024-11-26 21:16:25.747465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:07.721 [2024-11-26 21:16:25.747505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.721 [2024-11-26 21:16:25.747781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:07.721 [2024-11-26 
21:16:25.747984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:07.721 [2024-11-26 21:16:25.748025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:07.721 [2024-11-26 21:16:25.748223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.721 pt3 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.721 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.722 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.722 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.722 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.722 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.722 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.722 "name": "raid_bdev1", 00:09:07.722 "uuid": "a80c56e8-c020-40b7-a39a-d85e39b39b2a", 00:09:07.722 "strip_size_kb": 64, 00:09:07.722 "state": "online", 00:09:07.722 "raid_level": "concat", 00:09:07.722 "superblock": true, 00:09:07.722 "num_base_bdevs": 3, 00:09:07.722 "num_base_bdevs_discovered": 3, 00:09:07.722 "num_base_bdevs_operational": 3, 00:09:07.722 "base_bdevs_list": [ 00:09:07.722 { 00:09:07.722 "name": "pt1", 00:09:07.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.722 "is_configured": true, 00:09:07.722 "data_offset": 2048, 00:09:07.722 "data_size": 63488 00:09:07.722 }, 00:09:07.722 { 00:09:07.722 "name": "pt2", 00:09:07.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.722 "is_configured": true, 00:09:07.722 "data_offset": 2048, 00:09:07.722 "data_size": 63488 00:09:07.722 }, 00:09:07.722 { 00:09:07.722 "name": "pt3", 00:09:07.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.722 "is_configured": true, 00:09:07.722 "data_offset": 2048, 00:09:07.722 "data_size": 63488 00:09:07.722 } 00:09:07.722 ] 00:09:07.722 }' 00:09:07.722 21:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.722 21:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:08.291 21:16:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.291 [2024-11-26 21:16:26.186155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.291 "name": "raid_bdev1", 00:09:08.291 "aliases": [ 00:09:08.291 "a80c56e8-c020-40b7-a39a-d85e39b39b2a" 00:09:08.291 ], 00:09:08.291 "product_name": "Raid Volume", 00:09:08.291 "block_size": 512, 00:09:08.291 "num_blocks": 190464, 00:09:08.291 "uuid": "a80c56e8-c020-40b7-a39a-d85e39b39b2a", 00:09:08.291 "assigned_rate_limits": { 00:09:08.291 "rw_ios_per_sec": 0, 00:09:08.291 "rw_mbytes_per_sec": 0, 00:09:08.291 "r_mbytes_per_sec": 0, 00:09:08.291 "w_mbytes_per_sec": 0 00:09:08.291 }, 00:09:08.291 "claimed": false, 00:09:08.291 "zoned": false, 00:09:08.291 "supported_io_types": { 00:09:08.291 "read": true, 00:09:08.291 "write": true, 00:09:08.291 "unmap": true, 00:09:08.291 "flush": true, 00:09:08.291 "reset": true, 00:09:08.291 "nvme_admin": false, 00:09:08.291 "nvme_io": false, 00:09:08.291 "nvme_io_md": false, 00:09:08.291 
"write_zeroes": true, 00:09:08.291 "zcopy": false, 00:09:08.291 "get_zone_info": false, 00:09:08.291 "zone_management": false, 00:09:08.291 "zone_append": false, 00:09:08.291 "compare": false, 00:09:08.291 "compare_and_write": false, 00:09:08.291 "abort": false, 00:09:08.291 "seek_hole": false, 00:09:08.291 "seek_data": false, 00:09:08.291 "copy": false, 00:09:08.291 "nvme_iov_md": false 00:09:08.291 }, 00:09:08.291 "memory_domains": [ 00:09:08.291 { 00:09:08.291 "dma_device_id": "system", 00:09:08.291 "dma_device_type": 1 00:09:08.291 }, 00:09:08.291 { 00:09:08.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.291 "dma_device_type": 2 00:09:08.291 }, 00:09:08.291 { 00:09:08.291 "dma_device_id": "system", 00:09:08.291 "dma_device_type": 1 00:09:08.291 }, 00:09:08.291 { 00:09:08.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.291 "dma_device_type": 2 00:09:08.291 }, 00:09:08.291 { 00:09:08.291 "dma_device_id": "system", 00:09:08.291 "dma_device_type": 1 00:09:08.291 }, 00:09:08.291 { 00:09:08.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.291 "dma_device_type": 2 00:09:08.291 } 00:09:08.291 ], 00:09:08.291 "driver_specific": { 00:09:08.291 "raid": { 00:09:08.291 "uuid": "a80c56e8-c020-40b7-a39a-d85e39b39b2a", 00:09:08.291 "strip_size_kb": 64, 00:09:08.291 "state": "online", 00:09:08.291 "raid_level": "concat", 00:09:08.291 "superblock": true, 00:09:08.291 "num_base_bdevs": 3, 00:09:08.291 "num_base_bdevs_discovered": 3, 00:09:08.291 "num_base_bdevs_operational": 3, 00:09:08.291 "base_bdevs_list": [ 00:09:08.291 { 00:09:08.291 "name": "pt1", 00:09:08.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.291 "is_configured": true, 00:09:08.291 "data_offset": 2048, 00:09:08.291 "data_size": 63488 00:09:08.291 }, 00:09:08.291 { 00:09:08.291 "name": "pt2", 00:09:08.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.291 "is_configured": true, 00:09:08.291 "data_offset": 2048, 00:09:08.291 "data_size": 63488 00:09:08.291 }, 00:09:08.291 
{ 00:09:08.291 "name": "pt3", 00:09:08.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:08.291 "is_configured": true, 00:09:08.291 "data_offset": 2048, 00:09:08.291 "data_size": 63488 00:09:08.291 } 00:09:08.291 ] 00:09:08.291 } 00:09:08.291 } 00:09:08.291 }' 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:08.291 pt2 00:09:08.291 pt3' 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:08.291 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.292 [2024-11-26 
21:16:26.421677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.292 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a80c56e8-c020-40b7-a39a-d85e39b39b2a '!=' a80c56e8-c020-40b7-a39a-d85e39b39b2a ']' 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66694 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66694 ']' 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66694 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66694 00:09:08.552 killing process with pid 66694 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66694' 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66694 00:09:08.552 [2024-11-26 21:16:26.503124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.552 [2024-11-26 21:16:26.503206] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.552 [2024-11-26 21:16:26.503265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.552 [2024-11-26 21:16:26.503276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:08.552 21:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66694 00:09:08.812 [2024-11-26 21:16:26.788928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.751 21:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:09.751 00:09:09.751 real 0m5.121s 00:09:09.751 user 0m7.373s 00:09:09.751 sys 0m0.882s 00:09:09.751 21:16:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.751 21:16:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.751 ************************************ 00:09:09.751 END TEST raid_superblock_test 00:09:09.751 ************************************ 00:09:10.011 21:16:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:10.012 21:16:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:10.012 21:16:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.012 21:16:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:10.012 ************************************ 00:09:10.012 START TEST raid_read_error_test 00:09:10.012 ************************************ 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:10.012 21:16:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6oiGjh5BhK 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66947 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66947 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66947 ']' 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.012 21:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.012 [2024-11-26 21:16:28.026183] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:10.012 [2024-11-26 21:16:28.026370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66947 ] 00:09:10.272 [2024-11-26 21:16:28.196396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.272 [2024-11-26 21:16:28.299438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.531 [2024-11-26 21:16:28.493504] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.531 [2024-11-26 21:16:28.493616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.800 BaseBdev1_malloc 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.800 true 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.800 [2024-11-26 21:16:28.913704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:10.800 [2024-11-26 21:16:28.913760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.800 [2024-11-26 21:16:28.913781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:10.800 [2024-11-26 21:16:28.913792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.800 [2024-11-26 21:16:28.915866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.800 [2024-11-26 21:16:28.915906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:10.800 BaseBdev1 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.800 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.071 BaseBdev2_malloc 00:09:11.071 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.071 21:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:11.071 21:16:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.071 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.071 true 00:09:11.071 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.071 21:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:11.071 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.071 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.071 [2024-11-26 21:16:28.978998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:11.071 [2024-11-26 21:16:28.979090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.071 [2024-11-26 21:16:28.979109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:11.071 [2024-11-26 21:16:28.979120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.071 [2024-11-26 21:16:28.981220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.071 [2024-11-26 21:16:28.981260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:11.071 BaseBdev2 00:09:11.071 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.071 21:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:11.071 21:16:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:11.072 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.072 21:16:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.072 BaseBdev3_malloc 00:09:11.072 21:16:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.072 true 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.072 [2024-11-26 21:16:29.056144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:11.072 [2024-11-26 21:16:29.056238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.072 [2024-11-26 21:16:29.056258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:11.072 [2024-11-26 21:16:29.056269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.072 [2024-11-26 21:16:29.058352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.072 [2024-11-26 21:16:29.058391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:11.072 BaseBdev3 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.072 [2024-11-26 21:16:29.068200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.072 [2024-11-26 21:16:29.070003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.072 [2024-11-26 21:16:29.070075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.072 [2024-11-26 21:16:29.070276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:11.072 [2024-11-26 21:16:29.070289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:11.072 [2024-11-26 21:16:29.070530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:11.072 [2024-11-26 21:16:29.070694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:11.072 [2024-11-26 21:16:29.070707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:11.072 [2024-11-26 21:16:29.070854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.072 21:16:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.072 "name": "raid_bdev1", 00:09:11.072 "uuid": "4f09528b-1257-4af3-b652-acca81d19a6a", 00:09:11.072 "strip_size_kb": 64, 00:09:11.072 "state": "online", 00:09:11.072 "raid_level": "concat", 00:09:11.072 "superblock": true, 00:09:11.072 "num_base_bdevs": 3, 00:09:11.072 "num_base_bdevs_discovered": 3, 00:09:11.072 "num_base_bdevs_operational": 3, 00:09:11.072 "base_bdevs_list": [ 00:09:11.072 { 00:09:11.072 "name": "BaseBdev1", 00:09:11.072 "uuid": "765f8faf-1406-52a7-b5a1-9b2d967df690", 00:09:11.072 "is_configured": true, 00:09:11.072 "data_offset": 2048, 00:09:11.072 "data_size": 63488 00:09:11.072 }, 00:09:11.072 { 00:09:11.072 "name": "BaseBdev2", 00:09:11.072 "uuid": "f84075a4-e8a6-5cde-aac3-52e5cb2624e1", 00:09:11.072 "is_configured": true, 00:09:11.072 "data_offset": 2048, 00:09:11.072 "data_size": 63488 
00:09:11.072 }, 00:09:11.072 { 00:09:11.072 "name": "BaseBdev3", 00:09:11.072 "uuid": "c6456e06-8cac-5aba-a3df-b688f586c4cf", 00:09:11.072 "is_configured": true, 00:09:11.072 "data_offset": 2048, 00:09:11.072 "data_size": 63488 00:09:11.072 } 00:09:11.072 ] 00:09:11.072 }' 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.072 21:16:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.640 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:11.640 21:16:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:11.640 [2024-11-26 21:16:29.604684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
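At this point the test injects a read failure into `EE_BaseBdev1_malloc` and re-verifies the array state while bdevperf runs. The bdevperf results block that follows reports derived metrics (`mibps`, and the `fail_per_s` value the test later extracts and compares against `0.00`); the arithmetic behind them, using the raw counters from the results JSON below (the helper name is hypothetical):

```python
# Sketch: how bdevperf's derived fields relate to the raw counters
# (input values copied from the results JSON in this log).
def derive_stats(iops, io_size_bytes, io_failed, runtime_s):
    # Throughput: IOPS times the 128 KiB I/O size, expressed in MiB/s.
    mibps = iops * io_size_bytes / (1024 * 1024)
    # Failure rate: failed I/Os per second of runtime; the test asserts
    # this is nonzero after injecting a read error into one base bdev.
    fail_per_s = io_failed / runtime_s
    return mibps, fail_per_s

mibps, fail_per_s = derive_stats(
    iops=16352.272389490756, io_size_bytes=131072,
    io_failed=1, runtime_s=1.379319)
```

With these inputs, `mibps` matches the reported 2044.034 and `fail_per_s` rounds to the 0.72 that the `awk '{print $6}'` extraction later picks out of the summary line.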
00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.579 "name": "raid_bdev1", 00:09:12.579 "uuid": "4f09528b-1257-4af3-b652-acca81d19a6a", 00:09:12.579 "strip_size_kb": 64, 00:09:12.579 "state": "online", 00:09:12.579 "raid_level": "concat", 00:09:12.579 "superblock": true, 00:09:12.579 "num_base_bdevs": 3, 00:09:12.579 "num_base_bdevs_discovered": 3, 00:09:12.579 "num_base_bdevs_operational": 3, 00:09:12.579 "base_bdevs_list": [ 00:09:12.579 { 00:09:12.579 "name": "BaseBdev1", 00:09:12.579 "uuid": "765f8faf-1406-52a7-b5a1-9b2d967df690", 00:09:12.579 "is_configured": true, 00:09:12.579 "data_offset": 2048, 00:09:12.579 "data_size": 63488 
00:09:12.579 }, 00:09:12.579 { 00:09:12.579 "name": "BaseBdev2", 00:09:12.579 "uuid": "f84075a4-e8a6-5cde-aac3-52e5cb2624e1", 00:09:12.579 "is_configured": true, 00:09:12.579 "data_offset": 2048, 00:09:12.579 "data_size": 63488 00:09:12.579 }, 00:09:12.579 { 00:09:12.579 "name": "BaseBdev3", 00:09:12.579 "uuid": "c6456e06-8cac-5aba-a3df-b688f586c4cf", 00:09:12.579 "is_configured": true, 00:09:12.579 "data_offset": 2048, 00:09:12.579 "data_size": 63488 00:09:12.579 } 00:09:12.579 ] 00:09:12.579 }' 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.579 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.838 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:12.838 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.838 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.838 [2024-11-26 21:16:30.983159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.838 [2024-11-26 21:16:30.983190] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.838 [2024-11-26 21:16:30.985935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.838 [2024-11-26 21:16:30.986020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.838 [2024-11-26 21:16:30.986091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.838 [2024-11-26 21:16:30.986135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:12.838 { 00:09:12.838 "results": [ 00:09:12.838 { 00:09:12.838 "job": "raid_bdev1", 00:09:12.838 "core_mask": "0x1", 00:09:12.838 "workload": "randrw", 00:09:12.838 "percentage": 50, 
00:09:12.838 "status": "finished", 00:09:12.838 "queue_depth": 1, 00:09:12.838 "io_size": 131072, 00:09:12.838 "runtime": 1.379319, 00:09:12.838 "iops": 16352.272389490756, 00:09:12.838 "mibps": 2044.0340486863445, 00:09:12.838 "io_failed": 1, 00:09:12.838 "io_timeout": 0, 00:09:12.838 "avg_latency_us": 84.54380913956221, 00:09:12.838 "min_latency_us": 25.041048034934498, 00:09:12.838 "max_latency_us": 1452.380786026201 00:09:12.838 } 00:09:12.838 ], 00:09:12.838 "core_count": 1 00:09:12.838 } 00:09:12.838 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.838 21:16:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66947 00:09:12.838 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66947 ']' 00:09:12.838 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66947 00:09:12.838 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:13.098 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.098 21:16:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66947 00:09:13.098 21:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.098 21:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.098 21:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66947' 00:09:13.098 killing process with pid 66947 00:09:13.098 21:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66947 00:09:13.098 [2024-11-26 21:16:31.022639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.098 21:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66947 00:09:13.098 [2024-11-26 
21:16:31.238927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.476 21:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6oiGjh5BhK 00:09:14.476 21:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:14.476 21:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:14.476 21:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:14.476 21:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:14.476 ************************************ 00:09:14.476 END TEST raid_read_error_test 00:09:14.476 ************************************ 00:09:14.476 21:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:14.476 21:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:14.476 21:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:14.476 00:09:14.476 real 0m4.454s 00:09:14.476 user 0m5.310s 00:09:14.476 sys 0m0.539s 00:09:14.476 21:16:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.476 21:16:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.476 21:16:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:14.476 21:16:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:14.476 21:16:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.476 21:16:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.476 ************************************ 00:09:14.476 START TEST raid_write_error_test 00:09:14.476 ************************************ 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:14.476 21:16:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.476 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:14.477 21:16:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xrsRYVetcv 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67087 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67087 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67087 ']' 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.477 21:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.477 [2024-11-26 21:16:32.551096] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:14.477 [2024-11-26 21:16:32.551316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67087 ] 00:09:14.736 [2024-11-26 21:16:32.725694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.736 [2024-11-26 21:16:32.834197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.995 [2024-11-26 21:16:33.028759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.995 [2024-11-26 21:16:33.028850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.254 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.254 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:15.254 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.254 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:15.254 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.254 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.514 BaseBdev1_malloc 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.514 true 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.514 [2024-11-26 21:16:33.427900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:15.514 [2024-11-26 21:16:33.427965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.514 [2024-11-26 21:16:33.427985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:15.514 [2024-11-26 21:16:33.427996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.514 [2024-11-26 21:16:33.430040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.514 [2024-11-26 21:16:33.430077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:15.514 BaseBdev1 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.514 BaseBdev2_malloc 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.514 true 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.514 [2024-11-26 21:16:33.495624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:15.514 [2024-11-26 21:16:33.495676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.514 [2024-11-26 21:16:33.495691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:15.514 [2024-11-26 21:16:33.495701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.514 [2024-11-26 21:16:33.497650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.514 [2024-11-26 21:16:33.497753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:15.514 BaseBdev2 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.514 21:16:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.514 BaseBdev3_malloc 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.514 true 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.514 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.514 [2024-11-26 21:16:33.571446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:15.514 [2024-11-26 21:16:33.571496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.515 [2024-11-26 21:16:33.571513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:15.515 [2024-11-26 21:16:33.571522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.515 [2024-11-26 21:16:33.573489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.515 [2024-11-26 21:16:33.573576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:15.515 BaseBdev3 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.515 [2024-11-26 21:16:33.583497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.515 [2024-11-26 21:16:33.585250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.515 [2024-11-26 21:16:33.585318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.515 [2024-11-26 21:16:33.585506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:15.515 [2024-11-26 21:16:33.585518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.515 [2024-11-26 21:16:33.585759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:15.515 [2024-11-26 21:16:33.585906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:15.515 [2024-11-26 21:16:33.585919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:15.515 [2024-11-26 21:16:33.586074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.515 "name": "raid_bdev1", 00:09:15.515 "uuid": "73670523-85ac-410c-92de-b200bd2bcf79", 00:09:15.515 "strip_size_kb": 64, 00:09:15.515 "state": "online", 00:09:15.515 "raid_level": "concat", 00:09:15.515 "superblock": true, 00:09:15.515 "num_base_bdevs": 3, 00:09:15.515 "num_base_bdevs_discovered": 3, 00:09:15.515 "num_base_bdevs_operational": 3, 00:09:15.515 "base_bdevs_list": [ 00:09:15.515 { 00:09:15.515 
"name": "BaseBdev1", 00:09:15.515 "uuid": "a9fd045d-cfef-5fbd-87b1-0de515b2fa8c", 00:09:15.515 "is_configured": true, 00:09:15.515 "data_offset": 2048, 00:09:15.515 "data_size": 63488 00:09:15.515 }, 00:09:15.515 { 00:09:15.515 "name": "BaseBdev2", 00:09:15.515 "uuid": "7cd73aa8-63df-5750-9cce-7ee529d92618", 00:09:15.515 "is_configured": true, 00:09:15.515 "data_offset": 2048, 00:09:15.515 "data_size": 63488 00:09:15.515 }, 00:09:15.515 { 00:09:15.515 "name": "BaseBdev3", 00:09:15.515 "uuid": "da31e180-dd3e-524e-bc1d-578acd08e84b", 00:09:15.515 "is_configured": true, 00:09:15.515 "data_offset": 2048, 00:09:15.515 "data_size": 63488 00:09:15.515 } 00:09:15.515 ] 00:09:15.515 }' 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.515 21:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.083 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:16.083 21:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:16.083 [2024-11-26 21:16:34.083933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:17.022 21:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:17.022 21:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.022 21:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.022 "name": "raid_bdev1", 00:09:17.022 "uuid": "73670523-85ac-410c-92de-b200bd2bcf79", 00:09:17.022 "strip_size_kb": 64, 00:09:17.022 "state": "online", 
00:09:17.022 "raid_level": "concat", 00:09:17.022 "superblock": true, 00:09:17.022 "num_base_bdevs": 3, 00:09:17.022 "num_base_bdevs_discovered": 3, 00:09:17.022 "num_base_bdevs_operational": 3, 00:09:17.022 "base_bdevs_list": [ 00:09:17.022 { 00:09:17.022 "name": "BaseBdev1", 00:09:17.022 "uuid": "a9fd045d-cfef-5fbd-87b1-0de515b2fa8c", 00:09:17.022 "is_configured": true, 00:09:17.022 "data_offset": 2048, 00:09:17.022 "data_size": 63488 00:09:17.022 }, 00:09:17.022 { 00:09:17.022 "name": "BaseBdev2", 00:09:17.022 "uuid": "7cd73aa8-63df-5750-9cce-7ee529d92618", 00:09:17.022 "is_configured": true, 00:09:17.022 "data_offset": 2048, 00:09:17.022 "data_size": 63488 00:09:17.022 }, 00:09:17.022 { 00:09:17.022 "name": "BaseBdev3", 00:09:17.022 "uuid": "da31e180-dd3e-524e-bc1d-578acd08e84b", 00:09:17.022 "is_configured": true, 00:09:17.022 "data_offset": 2048, 00:09:17.022 "data_size": 63488 00:09:17.022 } 00:09:17.022 ] 00:09:17.022 }' 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.022 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.281 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:17.281 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.281 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.281 [2024-11-26 21:16:35.431616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.281 [2024-11-26 21:16:35.431649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.281 [2024-11-26 21:16:35.434334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.281 [2024-11-26 21:16:35.434394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.281 [2024-11-26 21:16:35.434432] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.281 [2024-11-26 21:16:35.434442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:17.540 { 00:09:17.540 "results": [ 00:09:17.540 { 00:09:17.540 "job": "raid_bdev1", 00:09:17.540 "core_mask": "0x1", 00:09:17.540 "workload": "randrw", 00:09:17.540 "percentage": 50, 00:09:17.540 "status": "finished", 00:09:17.540 "queue_depth": 1, 00:09:17.540 "io_size": 131072, 00:09:17.540 "runtime": 1.348634, 00:09:17.540 "iops": 15991.736824075324, 00:09:17.540 "mibps": 1998.9671030094155, 00:09:17.540 "io_failed": 1, 00:09:17.540 "io_timeout": 0, 00:09:17.540 "avg_latency_us": 86.48449328132897, 00:09:17.540 "min_latency_us": 24.705676855895195, 00:09:17.540 "max_latency_us": 1674.172925764192 00:09:17.540 } 00:09:17.540 ], 00:09:17.540 "core_count": 1 00:09:17.540 } 00:09:17.540 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.540 21:16:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67087 00:09:17.540 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67087 ']' 00:09:17.540 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67087 00:09:17.540 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:17.540 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.540 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67087 00:09:17.540 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.540 killing process with pid 67087 00:09:17.540 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.540 21:16:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67087' 00:09:17.540 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67087 00:09:17.540 [2024-11-26 21:16:35.465111] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.541 21:16:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67087 00:09:17.541 [2024-11-26 21:16:35.689834] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.921 21:16:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xrsRYVetcv 00:09:18.921 21:16:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:18.921 21:16:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:18.921 21:16:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:18.921 21:16:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:18.921 ************************************ 00:09:18.921 END TEST raid_write_error_test 00:09:18.921 ************************************ 00:09:18.921 21:16:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:18.921 21:16:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:18.921 21:16:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:18.921 00:09:18.921 real 0m4.408s 00:09:18.921 user 0m5.170s 00:09:18.921 sys 0m0.535s 00:09:18.921 21:16:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.921 21:16:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.921 21:16:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:18.921 21:16:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:18.921 21:16:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:18.921 21:16:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.921 21:16:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.921 ************************************ 00:09:18.921 START TEST raid_state_function_test 00:09:18.921 ************************************ 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67231 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67231' 00:09:18.921 Process raid pid: 67231 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67231 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67231 ']' 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.921 21:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.921 [2024-11-26 21:16:37.025555] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:18.921 [2024-11-26 21:16:37.025753] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.181 [2024-11-26 21:16:37.184013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.181 [2024-11-26 21:16:37.297144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.442 [2024-11-26 21:16:37.501479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.442 [2024-11-26 21:16:37.501595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.701 21:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.701 21:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:19.701 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.701 21:16:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.701 21:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.960 [2024-11-26 21:16:37.859577] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.961 [2024-11-26 21:16:37.859632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.961 [2024-11-26 21:16:37.859642] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.961 [2024-11-26 21:16:37.859652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.961 [2024-11-26 21:16:37.859658] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.961 [2024-11-26 21:16:37.859666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.961 
21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.961 "name": "Existed_Raid", 00:09:19.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.961 "strip_size_kb": 0, 00:09:19.961 "state": "configuring", 00:09:19.961 "raid_level": "raid1", 00:09:19.961 "superblock": false, 00:09:19.961 "num_base_bdevs": 3, 00:09:19.961 "num_base_bdevs_discovered": 0, 00:09:19.961 "num_base_bdevs_operational": 3, 00:09:19.961 "base_bdevs_list": [ 00:09:19.961 { 00:09:19.961 "name": "BaseBdev1", 00:09:19.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.961 "is_configured": false, 00:09:19.961 "data_offset": 0, 00:09:19.961 "data_size": 0 00:09:19.961 }, 00:09:19.961 { 00:09:19.961 "name": "BaseBdev2", 00:09:19.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.961 "is_configured": false, 00:09:19.961 "data_offset": 0, 00:09:19.961 "data_size": 0 00:09:19.961 }, 00:09:19.961 { 00:09:19.961 "name": "BaseBdev3", 00:09:19.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.961 "is_configured": false, 00:09:19.961 "data_offset": 0, 00:09:19.961 "data_size": 0 00:09:19.961 } 00:09:19.961 ] 00:09:19.961 }' 00:09:19.961 21:16:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.961 21:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.222 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.222 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.222 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.222 [2024-11-26 21:16:38.338706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.222 [2024-11-26 21:16:38.338791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:20.222 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.222 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.222 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.222 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.222 [2024-11-26 21:16:38.350678] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:20.222 [2024-11-26 21:16:38.350723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:20.222 [2024-11-26 21:16:38.350733] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.222 [2024-11-26 21:16:38.350742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.222 [2024-11-26 21:16:38.350748] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.222 [2024-11-26 21:16:38.350757] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.222 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.222 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:20.222 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.222 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.482 [2024-11-26 21:16:38.397470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.482 BaseBdev1 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.482 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.482 [ 00:09:20.482 { 00:09:20.482 "name": "BaseBdev1", 00:09:20.482 "aliases": [ 00:09:20.482 "6afd7f12-cc4f-453a-a7fb-f5347adb858d" 00:09:20.482 ], 00:09:20.482 "product_name": "Malloc disk", 00:09:20.482 "block_size": 512, 00:09:20.482 "num_blocks": 65536, 00:09:20.482 "uuid": "6afd7f12-cc4f-453a-a7fb-f5347adb858d", 00:09:20.482 "assigned_rate_limits": { 00:09:20.482 "rw_ios_per_sec": 0, 00:09:20.482 "rw_mbytes_per_sec": 0, 00:09:20.482 "r_mbytes_per_sec": 0, 00:09:20.483 "w_mbytes_per_sec": 0 00:09:20.483 }, 00:09:20.483 "claimed": true, 00:09:20.483 "claim_type": "exclusive_write", 00:09:20.483 "zoned": false, 00:09:20.483 "supported_io_types": { 00:09:20.483 "read": true, 00:09:20.483 "write": true, 00:09:20.483 "unmap": true, 00:09:20.483 "flush": true, 00:09:20.483 "reset": true, 00:09:20.483 "nvme_admin": false, 00:09:20.483 "nvme_io": false, 00:09:20.483 "nvme_io_md": false, 00:09:20.483 "write_zeroes": true, 00:09:20.483 "zcopy": true, 00:09:20.483 "get_zone_info": false, 00:09:20.483 "zone_management": false, 00:09:20.483 "zone_append": false, 00:09:20.483 "compare": false, 00:09:20.483 "compare_and_write": false, 00:09:20.483 "abort": true, 00:09:20.483 "seek_hole": false, 00:09:20.483 "seek_data": false, 00:09:20.483 "copy": true, 00:09:20.483 "nvme_iov_md": false 00:09:20.483 }, 00:09:20.483 "memory_domains": [ 00:09:20.483 { 00:09:20.483 "dma_device_id": "system", 00:09:20.483 "dma_device_type": 1 00:09:20.483 }, 00:09:20.483 { 00:09:20.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.483 "dma_device_type": 2 00:09:20.483 } 00:09:20.483 ], 00:09:20.483 "driver_specific": {} 00:09:20.483 } 00:09:20.483 ] 00:09:20.483 21:16:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:20.483 "name": "Existed_Raid", 00:09:20.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.483 "strip_size_kb": 0, 00:09:20.483 "state": "configuring", 00:09:20.483 "raid_level": "raid1", 00:09:20.483 "superblock": false, 00:09:20.483 "num_base_bdevs": 3, 00:09:20.483 "num_base_bdevs_discovered": 1, 00:09:20.483 "num_base_bdevs_operational": 3, 00:09:20.483 "base_bdevs_list": [ 00:09:20.483 { 00:09:20.483 "name": "BaseBdev1", 00:09:20.483 "uuid": "6afd7f12-cc4f-453a-a7fb-f5347adb858d", 00:09:20.483 "is_configured": true, 00:09:20.483 "data_offset": 0, 00:09:20.483 "data_size": 65536 00:09:20.483 }, 00:09:20.483 { 00:09:20.483 "name": "BaseBdev2", 00:09:20.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.483 "is_configured": false, 00:09:20.483 "data_offset": 0, 00:09:20.483 "data_size": 0 00:09:20.483 }, 00:09:20.483 { 00:09:20.483 "name": "BaseBdev3", 00:09:20.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.483 "is_configured": false, 00:09:20.483 "data_offset": 0, 00:09:20.483 "data_size": 0 00:09:20.483 } 00:09:20.483 ] 00:09:20.483 }' 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.483 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.743 [2024-11-26 21:16:38.801157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.743 [2024-11-26 21:16:38.801330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.743 [2024-11-26 21:16:38.813072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.743 [2024-11-26 21:16:38.816307] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.743 [2024-11-26 21:16:38.816468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.743 [2024-11-26 21:16:38.816493] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.743 [2024-11-26 21:16:38.816509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.743 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.743 "name": "Existed_Raid", 00:09:20.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.743 "strip_size_kb": 0, 00:09:20.743 "state": "configuring", 00:09:20.743 "raid_level": "raid1", 00:09:20.743 "superblock": false, 00:09:20.743 "num_base_bdevs": 3, 00:09:20.743 "num_base_bdevs_discovered": 1, 00:09:20.743 "num_base_bdevs_operational": 3, 00:09:20.743 "base_bdevs_list": [ 00:09:20.743 { 00:09:20.743 "name": "BaseBdev1", 00:09:20.743 "uuid": "6afd7f12-cc4f-453a-a7fb-f5347adb858d", 00:09:20.743 "is_configured": true, 00:09:20.743 "data_offset": 0, 00:09:20.743 "data_size": 65536 00:09:20.743 }, 00:09:20.743 { 00:09:20.743 "name": "BaseBdev2", 00:09:20.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.743 
"is_configured": false, 00:09:20.743 "data_offset": 0, 00:09:20.743 "data_size": 0 00:09:20.743 }, 00:09:20.743 { 00:09:20.743 "name": "BaseBdev3", 00:09:20.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.743 "is_configured": false, 00:09:20.743 "data_offset": 0, 00:09:20.744 "data_size": 0 00:09:20.744 } 00:09:20.744 ] 00:09:20.744 }' 00:09:20.744 21:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.744 21:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.312 [2024-11-26 21:16:39.321860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.312 BaseBdev2 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.312 21:16:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.312 [ 00:09:21.312 { 00:09:21.312 "name": "BaseBdev2", 00:09:21.312 "aliases": [ 00:09:21.312 "9d43a406-3cdc-4223-9adb-1348ab07472e" 00:09:21.312 ], 00:09:21.312 "product_name": "Malloc disk", 00:09:21.312 "block_size": 512, 00:09:21.312 "num_blocks": 65536, 00:09:21.312 "uuid": "9d43a406-3cdc-4223-9adb-1348ab07472e", 00:09:21.312 "assigned_rate_limits": { 00:09:21.312 "rw_ios_per_sec": 0, 00:09:21.312 "rw_mbytes_per_sec": 0, 00:09:21.312 "r_mbytes_per_sec": 0, 00:09:21.312 "w_mbytes_per_sec": 0 00:09:21.312 }, 00:09:21.312 "claimed": true, 00:09:21.312 "claim_type": "exclusive_write", 00:09:21.312 "zoned": false, 00:09:21.312 "supported_io_types": { 00:09:21.312 "read": true, 00:09:21.312 "write": true, 00:09:21.312 "unmap": true, 00:09:21.312 "flush": true, 00:09:21.312 "reset": true, 00:09:21.312 "nvme_admin": false, 00:09:21.312 "nvme_io": false, 00:09:21.312 "nvme_io_md": false, 00:09:21.312 "write_zeroes": true, 00:09:21.312 "zcopy": true, 00:09:21.312 "get_zone_info": false, 00:09:21.312 "zone_management": false, 00:09:21.312 "zone_append": false, 00:09:21.312 "compare": false, 00:09:21.312 "compare_and_write": false, 00:09:21.312 "abort": true, 00:09:21.312 "seek_hole": false, 00:09:21.312 "seek_data": false, 00:09:21.312 "copy": true, 00:09:21.312 "nvme_iov_md": false 00:09:21.312 }, 00:09:21.312 
"memory_domains": [ 00:09:21.312 { 00:09:21.312 "dma_device_id": "system", 00:09:21.312 "dma_device_type": 1 00:09:21.312 }, 00:09:21.312 { 00:09:21.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.312 "dma_device_type": 2 00:09:21.312 } 00:09:21.312 ], 00:09:21.312 "driver_specific": {} 00:09:21.312 } 00:09:21.312 ] 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.312 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.313 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.313 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.313 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.313 "name": "Existed_Raid", 00:09:21.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.313 "strip_size_kb": 0, 00:09:21.313 "state": "configuring", 00:09:21.313 "raid_level": "raid1", 00:09:21.313 "superblock": false, 00:09:21.313 "num_base_bdevs": 3, 00:09:21.313 "num_base_bdevs_discovered": 2, 00:09:21.313 "num_base_bdevs_operational": 3, 00:09:21.313 "base_bdevs_list": [ 00:09:21.313 { 00:09:21.313 "name": "BaseBdev1", 00:09:21.313 "uuid": "6afd7f12-cc4f-453a-a7fb-f5347adb858d", 00:09:21.313 "is_configured": true, 00:09:21.313 "data_offset": 0, 00:09:21.313 "data_size": 65536 00:09:21.313 }, 00:09:21.313 { 00:09:21.313 "name": "BaseBdev2", 00:09:21.313 "uuid": "9d43a406-3cdc-4223-9adb-1348ab07472e", 00:09:21.313 "is_configured": true, 00:09:21.313 "data_offset": 0, 00:09:21.313 "data_size": 65536 00:09:21.313 }, 00:09:21.313 { 00:09:21.313 "name": "BaseBdev3", 00:09:21.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.313 "is_configured": false, 00:09:21.313 "data_offset": 0, 00:09:21.313 "data_size": 0 00:09:21.313 } 00:09:21.313 ] 00:09:21.313 }' 00:09:21.313 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.313 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.882 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:21.882 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.882 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.882 [2024-11-26 21:16:39.815251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.882 [2024-11-26 21:16:39.815298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:21.882 [2024-11-26 21:16:39.815312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:21.882 [2024-11-26 21:16:39.815715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:21.882 [2024-11-26 21:16:39.815900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.882 [2024-11-26 21:16:39.815918] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:21.882 [2024-11-26 21:16:39.816204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.882 BaseBdev3 00:09:21.882 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.882 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:21.882 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:21.882 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.882 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.883 [ 00:09:21.883 { 00:09:21.883 "name": "BaseBdev3", 00:09:21.883 "aliases": [ 00:09:21.883 "d2f09d17-c3bd-496a-b807-e4602ba8d585" 00:09:21.883 ], 00:09:21.883 "product_name": "Malloc disk", 00:09:21.883 "block_size": 512, 00:09:21.883 "num_blocks": 65536, 00:09:21.883 "uuid": "d2f09d17-c3bd-496a-b807-e4602ba8d585", 00:09:21.883 "assigned_rate_limits": { 00:09:21.883 "rw_ios_per_sec": 0, 00:09:21.883 "rw_mbytes_per_sec": 0, 00:09:21.883 "r_mbytes_per_sec": 0, 00:09:21.883 "w_mbytes_per_sec": 0 00:09:21.883 }, 00:09:21.883 "claimed": true, 00:09:21.883 "claim_type": "exclusive_write", 00:09:21.883 "zoned": false, 00:09:21.883 "supported_io_types": { 00:09:21.883 "read": true, 00:09:21.883 "write": true, 00:09:21.883 "unmap": true, 00:09:21.883 "flush": true, 00:09:21.883 "reset": true, 00:09:21.883 "nvme_admin": false, 00:09:21.883 "nvme_io": false, 00:09:21.883 "nvme_io_md": false, 00:09:21.883 "write_zeroes": true, 00:09:21.883 "zcopy": true, 00:09:21.883 "get_zone_info": false, 00:09:21.883 "zone_management": false, 00:09:21.883 "zone_append": false, 00:09:21.883 "compare": false, 00:09:21.883 "compare_and_write": false, 00:09:21.883 "abort": true, 00:09:21.883 "seek_hole": false, 00:09:21.883 "seek_data": false, 00:09:21.883 
"copy": true, 00:09:21.883 "nvme_iov_md": false 00:09:21.883 }, 00:09:21.883 "memory_domains": [ 00:09:21.883 { 00:09:21.883 "dma_device_id": "system", 00:09:21.883 "dma_device_type": 1 00:09:21.883 }, 00:09:21.883 { 00:09:21.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.883 "dma_device_type": 2 00:09:21.883 } 00:09:21.883 ], 00:09:21.883 "driver_specific": {} 00:09:21.883 } 00:09:21.883 ] 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.883 21:16:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.883 "name": "Existed_Raid", 00:09:21.883 "uuid": "da0c4d66-d115-443d-a95e-e8562d4df9a2", 00:09:21.883 "strip_size_kb": 0, 00:09:21.883 "state": "online", 00:09:21.883 "raid_level": "raid1", 00:09:21.883 "superblock": false, 00:09:21.883 "num_base_bdevs": 3, 00:09:21.883 "num_base_bdevs_discovered": 3, 00:09:21.883 "num_base_bdevs_operational": 3, 00:09:21.883 "base_bdevs_list": [ 00:09:21.883 { 00:09:21.883 "name": "BaseBdev1", 00:09:21.883 "uuid": "6afd7f12-cc4f-453a-a7fb-f5347adb858d", 00:09:21.883 "is_configured": true, 00:09:21.883 "data_offset": 0, 00:09:21.883 "data_size": 65536 00:09:21.883 }, 00:09:21.883 { 00:09:21.883 "name": "BaseBdev2", 00:09:21.883 "uuid": "9d43a406-3cdc-4223-9adb-1348ab07472e", 00:09:21.883 "is_configured": true, 00:09:21.883 "data_offset": 0, 00:09:21.883 "data_size": 65536 00:09:21.883 }, 00:09:21.883 { 00:09:21.883 "name": "BaseBdev3", 00:09:21.883 "uuid": "d2f09d17-c3bd-496a-b807-e4602ba8d585", 00:09:21.883 "is_configured": true, 00:09:21.883 "data_offset": 0, 00:09:21.883 "data_size": 65536 00:09:21.883 } 00:09:21.883 ] 00:09:21.883 }' 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.883 21:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.143 21:16:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:22.143 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:22.143 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:22.143 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:22.143 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.143 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.403 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:22.403 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.403 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.403 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.403 [2024-11-26 21:16:40.302749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.403 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.403 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.403 "name": "Existed_Raid", 00:09:22.403 "aliases": [ 00:09:22.403 "da0c4d66-d115-443d-a95e-e8562d4df9a2" 00:09:22.403 ], 00:09:22.403 "product_name": "Raid Volume", 00:09:22.403 "block_size": 512, 00:09:22.403 "num_blocks": 65536, 00:09:22.403 "uuid": "da0c4d66-d115-443d-a95e-e8562d4df9a2", 00:09:22.403 "assigned_rate_limits": { 00:09:22.403 "rw_ios_per_sec": 0, 00:09:22.403 "rw_mbytes_per_sec": 0, 00:09:22.403 "r_mbytes_per_sec": 0, 00:09:22.403 "w_mbytes_per_sec": 0 00:09:22.403 }, 00:09:22.403 "claimed": false, 00:09:22.403 "zoned": false, 
00:09:22.403 "supported_io_types": { 00:09:22.403 "read": true, 00:09:22.403 "write": true, 00:09:22.403 "unmap": false, 00:09:22.403 "flush": false, 00:09:22.403 "reset": true, 00:09:22.403 "nvme_admin": false, 00:09:22.403 "nvme_io": false, 00:09:22.403 "nvme_io_md": false, 00:09:22.403 "write_zeroes": true, 00:09:22.403 "zcopy": false, 00:09:22.403 "get_zone_info": false, 00:09:22.403 "zone_management": false, 00:09:22.403 "zone_append": false, 00:09:22.403 "compare": false, 00:09:22.403 "compare_and_write": false, 00:09:22.403 "abort": false, 00:09:22.403 "seek_hole": false, 00:09:22.403 "seek_data": false, 00:09:22.403 "copy": false, 00:09:22.403 "nvme_iov_md": false 00:09:22.403 }, 00:09:22.403 "memory_domains": [ 00:09:22.403 { 00:09:22.403 "dma_device_id": "system", 00:09:22.403 "dma_device_type": 1 00:09:22.403 }, 00:09:22.403 { 00:09:22.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.403 "dma_device_type": 2 00:09:22.403 }, 00:09:22.403 { 00:09:22.403 "dma_device_id": "system", 00:09:22.403 "dma_device_type": 1 00:09:22.403 }, 00:09:22.403 { 00:09:22.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.403 "dma_device_type": 2 00:09:22.403 }, 00:09:22.403 { 00:09:22.403 "dma_device_id": "system", 00:09:22.403 "dma_device_type": 1 00:09:22.403 }, 00:09:22.403 { 00:09:22.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.403 "dma_device_type": 2 00:09:22.403 } 00:09:22.403 ], 00:09:22.403 "driver_specific": { 00:09:22.403 "raid": { 00:09:22.403 "uuid": "da0c4d66-d115-443d-a95e-e8562d4df9a2", 00:09:22.403 "strip_size_kb": 0, 00:09:22.403 "state": "online", 00:09:22.403 "raid_level": "raid1", 00:09:22.403 "superblock": false, 00:09:22.403 "num_base_bdevs": 3, 00:09:22.403 "num_base_bdevs_discovered": 3, 00:09:22.403 "num_base_bdevs_operational": 3, 00:09:22.403 "base_bdevs_list": [ 00:09:22.403 { 00:09:22.403 "name": "BaseBdev1", 00:09:22.403 "uuid": "6afd7f12-cc4f-453a-a7fb-f5347adb858d", 00:09:22.403 "is_configured": true, 00:09:22.403 
"data_offset": 0, 00:09:22.403 "data_size": 65536 00:09:22.403 }, 00:09:22.403 { 00:09:22.403 "name": "BaseBdev2", 00:09:22.403 "uuid": "9d43a406-3cdc-4223-9adb-1348ab07472e", 00:09:22.403 "is_configured": true, 00:09:22.403 "data_offset": 0, 00:09:22.403 "data_size": 65536 00:09:22.403 }, 00:09:22.403 { 00:09:22.403 "name": "BaseBdev3", 00:09:22.403 "uuid": "d2f09d17-c3bd-496a-b807-e4602ba8d585", 00:09:22.404 "is_configured": true, 00:09:22.404 "data_offset": 0, 00:09:22.404 "data_size": 65536 00:09:22.404 } 00:09:22.404 ] 00:09:22.404 } 00:09:22.404 } 00:09:22.404 }' 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:22.404 BaseBdev2 00:09:22.404 BaseBdev3' 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.404 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.663 [2024-11-26 21:16:40.601977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.663 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:22.664 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.664 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.664 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.664 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.664 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.664 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.664 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.664 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.664 "name": "Existed_Raid", 00:09:22.664 "uuid": "da0c4d66-d115-443d-a95e-e8562d4df9a2", 00:09:22.664 "strip_size_kb": 0, 00:09:22.664 "state": "online", 00:09:22.664 "raid_level": "raid1", 00:09:22.664 "superblock": false, 00:09:22.664 "num_base_bdevs": 3, 00:09:22.664 "num_base_bdevs_discovered": 2, 00:09:22.664 "num_base_bdevs_operational": 2, 00:09:22.664 "base_bdevs_list": [ 00:09:22.664 { 00:09:22.664 "name": null, 00:09:22.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.664 "is_configured": false, 00:09:22.664 "data_offset": 0, 00:09:22.664 "data_size": 65536 00:09:22.664 }, 00:09:22.664 { 00:09:22.664 "name": "BaseBdev2", 00:09:22.664 "uuid": "9d43a406-3cdc-4223-9adb-1348ab07472e", 00:09:22.664 "is_configured": true, 00:09:22.664 "data_offset": 0, 00:09:22.664 "data_size": 65536 00:09:22.664 }, 00:09:22.664 { 00:09:22.664 "name": "BaseBdev3", 00:09:22.664 "uuid": "d2f09d17-c3bd-496a-b807-e4602ba8d585", 00:09:22.664 "is_configured": true, 00:09:22.664 "data_offset": 0, 00:09:22.664 "data_size": 65536 00:09:22.664 } 00:09:22.664 ] 
00:09:22.664 }' 00:09:22.664 21:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.664 21:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.232 [2024-11-26 21:16:41.180872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.232 21:16:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.232 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.232 [2024-11-26 21:16:41.327249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:23.232 [2024-11-26 21:16:41.327345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.538 [2024-11-26 21:16:41.420533] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.539 [2024-11-26 21:16:41.420657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.539 [2024-11-26 21:16:41.420700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:23.539 21:16:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.539 BaseBdev2 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.539 
21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.539 [ 00:09:23.539 { 00:09:23.539 "name": "BaseBdev2", 00:09:23.539 "aliases": [ 00:09:23.539 "d7e498ce-83d0-4e76-87a7-3754a8f1b964" 00:09:23.539 ], 00:09:23.539 "product_name": "Malloc disk", 00:09:23.539 "block_size": 512, 00:09:23.539 "num_blocks": 65536, 00:09:23.539 "uuid": "d7e498ce-83d0-4e76-87a7-3754a8f1b964", 00:09:23.539 "assigned_rate_limits": { 00:09:23.539 "rw_ios_per_sec": 0, 00:09:23.539 "rw_mbytes_per_sec": 0, 00:09:23.539 "r_mbytes_per_sec": 0, 00:09:23.539 "w_mbytes_per_sec": 0 00:09:23.539 }, 00:09:23.539 "claimed": false, 00:09:23.539 "zoned": false, 00:09:23.539 "supported_io_types": { 00:09:23.539 "read": true, 00:09:23.539 "write": true, 00:09:23.539 "unmap": true, 00:09:23.539 "flush": true, 00:09:23.539 "reset": true, 00:09:23.539 "nvme_admin": false, 00:09:23.539 "nvme_io": false, 00:09:23.539 "nvme_io_md": false, 00:09:23.539 "write_zeroes": true, 
00:09:23.539 "zcopy": true, 00:09:23.539 "get_zone_info": false, 00:09:23.539 "zone_management": false, 00:09:23.539 "zone_append": false, 00:09:23.539 "compare": false, 00:09:23.539 "compare_and_write": false, 00:09:23.539 "abort": true, 00:09:23.539 "seek_hole": false, 00:09:23.539 "seek_data": false, 00:09:23.539 "copy": true, 00:09:23.539 "nvme_iov_md": false 00:09:23.539 }, 00:09:23.539 "memory_domains": [ 00:09:23.539 { 00:09:23.539 "dma_device_id": "system", 00:09:23.539 "dma_device_type": 1 00:09:23.539 }, 00:09:23.539 { 00:09:23.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.539 "dma_device_type": 2 00:09:23.539 } 00:09:23.539 ], 00:09:23.539 "driver_specific": {} 00:09:23.539 } 00:09:23.539 ] 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.539 BaseBdev3 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.539 21:16:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.539 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.540 [ 00:09:23.540 { 00:09:23.540 "name": "BaseBdev3", 00:09:23.540 "aliases": [ 00:09:23.540 "dfd90c91-4bfb-4058-a996-5b434f12986d" 00:09:23.540 ], 00:09:23.540 "product_name": "Malloc disk", 00:09:23.540 "block_size": 512, 00:09:23.540 "num_blocks": 65536, 00:09:23.540 "uuid": "dfd90c91-4bfb-4058-a996-5b434f12986d", 00:09:23.540 "assigned_rate_limits": { 00:09:23.540 "rw_ios_per_sec": 0, 00:09:23.540 "rw_mbytes_per_sec": 0, 00:09:23.540 "r_mbytes_per_sec": 0, 00:09:23.540 "w_mbytes_per_sec": 0 00:09:23.540 }, 00:09:23.540 "claimed": false, 00:09:23.540 "zoned": false, 00:09:23.540 "supported_io_types": { 00:09:23.540 "read": true, 00:09:23.540 "write": true, 00:09:23.540 "unmap": true, 00:09:23.540 "flush": true, 00:09:23.540 "reset": true, 00:09:23.540 "nvme_admin": false, 00:09:23.540 "nvme_io": false, 00:09:23.540 "nvme_io_md": false, 00:09:23.540 "write_zeroes": true, 
00:09:23.540 "zcopy": true, 00:09:23.540 "get_zone_info": false, 00:09:23.540 "zone_management": false, 00:09:23.540 "zone_append": false, 00:09:23.540 "compare": false, 00:09:23.540 "compare_and_write": false, 00:09:23.540 "abort": true, 00:09:23.540 "seek_hole": false, 00:09:23.540 "seek_data": false, 00:09:23.540 "copy": true, 00:09:23.540 "nvme_iov_md": false 00:09:23.540 }, 00:09:23.540 "memory_domains": [ 00:09:23.540 { 00:09:23.540 "dma_device_id": "system", 00:09:23.540 "dma_device_type": 1 00:09:23.540 }, 00:09:23.540 { 00:09:23.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.540 "dma_device_type": 2 00:09:23.540 } 00:09:23.540 ], 00:09:23.540 "driver_specific": {} 00:09:23.540 } 00:09:23.540 ] 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.540 [2024-11-26 21:16:41.635017] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.540 [2024-11-26 21:16:41.635061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.540 [2024-11-26 21:16:41.635082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.540 [2024-11-26 21:16:41.636862] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.540 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:23.540 "name": "Existed_Raid", 00:09:23.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.540 "strip_size_kb": 0, 00:09:23.540 "state": "configuring", 00:09:23.540 "raid_level": "raid1", 00:09:23.540 "superblock": false, 00:09:23.540 "num_base_bdevs": 3, 00:09:23.540 "num_base_bdevs_discovered": 2, 00:09:23.540 "num_base_bdevs_operational": 3, 00:09:23.540 "base_bdevs_list": [ 00:09:23.540 { 00:09:23.540 "name": "BaseBdev1", 00:09:23.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.540 "is_configured": false, 00:09:23.540 "data_offset": 0, 00:09:23.540 "data_size": 0 00:09:23.540 }, 00:09:23.540 { 00:09:23.540 "name": "BaseBdev2", 00:09:23.540 "uuid": "d7e498ce-83d0-4e76-87a7-3754a8f1b964", 00:09:23.540 "is_configured": true, 00:09:23.540 "data_offset": 0, 00:09:23.540 "data_size": 65536 00:09:23.540 }, 00:09:23.540 { 00:09:23.540 "name": "BaseBdev3", 00:09:23.540 "uuid": "dfd90c91-4bfb-4058-a996-5b434f12986d", 00:09:23.540 "is_configured": true, 00:09:23.540 "data_offset": 0, 00:09:23.540 "data_size": 65536 00:09:23.540 } 00:09:23.540 ] 00:09:23.540 }' 00:09:23.541 21:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.541 21:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.121 [2024-11-26 21:16:42.070292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.121 "name": "Existed_Raid", 00:09:24.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.121 "strip_size_kb": 0, 00:09:24.121 "state": "configuring", 00:09:24.121 "raid_level": "raid1", 00:09:24.121 "superblock": false, 00:09:24.121 "num_base_bdevs": 3, 
00:09:24.121 "num_base_bdevs_discovered": 1, 00:09:24.121 "num_base_bdevs_operational": 3, 00:09:24.121 "base_bdevs_list": [ 00:09:24.121 { 00:09:24.121 "name": "BaseBdev1", 00:09:24.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.121 "is_configured": false, 00:09:24.121 "data_offset": 0, 00:09:24.121 "data_size": 0 00:09:24.121 }, 00:09:24.121 { 00:09:24.121 "name": null, 00:09:24.121 "uuid": "d7e498ce-83d0-4e76-87a7-3754a8f1b964", 00:09:24.121 "is_configured": false, 00:09:24.121 "data_offset": 0, 00:09:24.121 "data_size": 65536 00:09:24.121 }, 00:09:24.121 { 00:09:24.121 "name": "BaseBdev3", 00:09:24.121 "uuid": "dfd90c91-4bfb-4058-a996-5b434f12986d", 00:09:24.121 "is_configured": true, 00:09:24.121 "data_offset": 0, 00:09:24.121 "data_size": 65536 00:09:24.121 } 00:09:24.121 ] 00:09:24.121 }' 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.121 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.380 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:24.380 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.380 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.380 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.380 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.380 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:24.380 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:24.380 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.380 21:16:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.640 [2024-11-26 21:16:42.570529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.640 BaseBdev1 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.640 [ 00:09:24.640 { 00:09:24.640 "name": "BaseBdev1", 00:09:24.640 "aliases": [ 00:09:24.640 "0d3e5c57-e750-46df-8c63-5a9338ad6e6c" 00:09:24.640 ], 00:09:24.640 "product_name": "Malloc disk", 
00:09:24.640 "block_size": 512, 00:09:24.640 "num_blocks": 65536, 00:09:24.640 "uuid": "0d3e5c57-e750-46df-8c63-5a9338ad6e6c", 00:09:24.640 "assigned_rate_limits": { 00:09:24.640 "rw_ios_per_sec": 0, 00:09:24.640 "rw_mbytes_per_sec": 0, 00:09:24.640 "r_mbytes_per_sec": 0, 00:09:24.640 "w_mbytes_per_sec": 0 00:09:24.640 }, 00:09:24.640 "claimed": true, 00:09:24.640 "claim_type": "exclusive_write", 00:09:24.640 "zoned": false, 00:09:24.640 "supported_io_types": { 00:09:24.640 "read": true, 00:09:24.640 "write": true, 00:09:24.640 "unmap": true, 00:09:24.640 "flush": true, 00:09:24.640 "reset": true, 00:09:24.640 "nvme_admin": false, 00:09:24.640 "nvme_io": false, 00:09:24.640 "nvme_io_md": false, 00:09:24.640 "write_zeroes": true, 00:09:24.640 "zcopy": true, 00:09:24.640 "get_zone_info": false, 00:09:24.640 "zone_management": false, 00:09:24.640 "zone_append": false, 00:09:24.640 "compare": false, 00:09:24.640 "compare_and_write": false, 00:09:24.640 "abort": true, 00:09:24.640 "seek_hole": false, 00:09:24.640 "seek_data": false, 00:09:24.640 "copy": true, 00:09:24.640 "nvme_iov_md": false 00:09:24.640 }, 00:09:24.640 "memory_domains": [ 00:09:24.640 { 00:09:24.640 "dma_device_id": "system", 00:09:24.640 "dma_device_type": 1 00:09:24.640 }, 00:09:24.640 { 00:09:24.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.640 "dma_device_type": 2 00:09:24.640 } 00:09:24.640 ], 00:09:24.640 "driver_specific": {} 00:09:24.640 } 00:09:24.640 ] 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.640 "name": "Existed_Raid", 00:09:24.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.640 "strip_size_kb": 0, 00:09:24.640 "state": "configuring", 00:09:24.640 "raid_level": "raid1", 00:09:24.640 "superblock": false, 00:09:24.640 "num_base_bdevs": 3, 00:09:24.640 "num_base_bdevs_discovered": 2, 00:09:24.640 "num_base_bdevs_operational": 3, 00:09:24.640 "base_bdevs_list": [ 00:09:24.640 { 00:09:24.640 "name": "BaseBdev1", 00:09:24.640 "uuid": 
"0d3e5c57-e750-46df-8c63-5a9338ad6e6c", 00:09:24.640 "is_configured": true, 00:09:24.640 "data_offset": 0, 00:09:24.640 "data_size": 65536 00:09:24.640 }, 00:09:24.640 { 00:09:24.640 "name": null, 00:09:24.640 "uuid": "d7e498ce-83d0-4e76-87a7-3754a8f1b964", 00:09:24.640 "is_configured": false, 00:09:24.640 "data_offset": 0, 00:09:24.640 "data_size": 65536 00:09:24.640 }, 00:09:24.640 { 00:09:24.640 "name": "BaseBdev3", 00:09:24.640 "uuid": "dfd90c91-4bfb-4058-a996-5b434f12986d", 00:09:24.640 "is_configured": true, 00:09:24.640 "data_offset": 0, 00:09:24.640 "data_size": 65536 00:09:24.640 } 00:09:24.640 ] 00:09:24.640 }' 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.640 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.899 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:24.899 21:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.899 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.899 21:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.899 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.899 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:24.899 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:24.899 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.899 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.899 [2024-11-26 21:16:43.009804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:24.899 21:16:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.900 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.159 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.159 "name": "Existed_Raid", 00:09:25.159 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:25.159 "strip_size_kb": 0, 00:09:25.159 "state": "configuring", 00:09:25.159 "raid_level": "raid1", 00:09:25.159 "superblock": false, 00:09:25.159 "num_base_bdevs": 3, 00:09:25.159 "num_base_bdevs_discovered": 1, 00:09:25.159 "num_base_bdevs_operational": 3, 00:09:25.159 "base_bdevs_list": [ 00:09:25.159 { 00:09:25.159 "name": "BaseBdev1", 00:09:25.159 "uuid": "0d3e5c57-e750-46df-8c63-5a9338ad6e6c", 00:09:25.159 "is_configured": true, 00:09:25.159 "data_offset": 0, 00:09:25.159 "data_size": 65536 00:09:25.159 }, 00:09:25.159 { 00:09:25.159 "name": null, 00:09:25.159 "uuid": "d7e498ce-83d0-4e76-87a7-3754a8f1b964", 00:09:25.159 "is_configured": false, 00:09:25.159 "data_offset": 0, 00:09:25.159 "data_size": 65536 00:09:25.159 }, 00:09:25.159 { 00:09:25.159 "name": null, 00:09:25.159 "uuid": "dfd90c91-4bfb-4058-a996-5b434f12986d", 00:09:25.159 "is_configured": false, 00:09:25.159 "data_offset": 0, 00:09:25.159 "data_size": 65536 00:09:25.159 } 00:09:25.159 ] 00:09:25.159 }' 00:09:25.160 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.160 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.419 [2024-11-26 21:16:43.517045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.419 "name": "Existed_Raid", 00:09:25.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.419 "strip_size_kb": 0, 00:09:25.419 "state": "configuring", 00:09:25.419 "raid_level": "raid1", 00:09:25.419 "superblock": false, 00:09:25.419 "num_base_bdevs": 3, 00:09:25.419 "num_base_bdevs_discovered": 2, 00:09:25.419 "num_base_bdevs_operational": 3, 00:09:25.419 "base_bdevs_list": [ 00:09:25.419 { 00:09:25.419 "name": "BaseBdev1", 00:09:25.419 "uuid": "0d3e5c57-e750-46df-8c63-5a9338ad6e6c", 00:09:25.419 "is_configured": true, 00:09:25.419 "data_offset": 0, 00:09:25.419 "data_size": 65536 00:09:25.419 }, 00:09:25.419 { 00:09:25.419 "name": null, 00:09:25.419 "uuid": "d7e498ce-83d0-4e76-87a7-3754a8f1b964", 00:09:25.419 "is_configured": false, 00:09:25.419 "data_offset": 0, 00:09:25.419 "data_size": 65536 00:09:25.419 }, 00:09:25.419 { 00:09:25.419 "name": "BaseBdev3", 00:09:25.419 "uuid": "dfd90c91-4bfb-4058-a996-5b434f12986d", 00:09:25.419 "is_configured": true, 00:09:25.419 "data_offset": 0, 00:09:25.419 "data_size": 65536 00:09:25.419 } 00:09:25.419 ] 00:09:25.419 }' 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.419 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.989 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.989 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.989 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:25.989 21:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.989 21:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.989 [2024-11-26 21:16:44.008156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.989 21:16:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.989 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.249 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.249 "name": "Existed_Raid", 00:09:26.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.249 "strip_size_kb": 0, 00:09:26.249 "state": "configuring", 00:09:26.249 "raid_level": "raid1", 00:09:26.249 "superblock": false, 00:09:26.249 "num_base_bdevs": 3, 00:09:26.249 "num_base_bdevs_discovered": 1, 00:09:26.249 "num_base_bdevs_operational": 3, 00:09:26.249 "base_bdevs_list": [ 00:09:26.249 { 00:09:26.249 "name": null, 00:09:26.249 "uuid": "0d3e5c57-e750-46df-8c63-5a9338ad6e6c", 00:09:26.249 "is_configured": false, 00:09:26.249 "data_offset": 0, 00:09:26.249 "data_size": 65536 00:09:26.249 }, 00:09:26.249 { 00:09:26.249 "name": null, 00:09:26.249 "uuid": "d7e498ce-83d0-4e76-87a7-3754a8f1b964", 00:09:26.249 "is_configured": false, 00:09:26.249 "data_offset": 0, 00:09:26.249 "data_size": 65536 00:09:26.249 }, 00:09:26.249 { 00:09:26.249 "name": "BaseBdev3", 00:09:26.249 "uuid": "dfd90c91-4bfb-4058-a996-5b434f12986d", 00:09:26.249 "is_configured": true, 00:09:26.249 "data_offset": 0, 00:09:26.249 "data_size": 65536 00:09:26.249 } 00:09:26.249 ] 00:09:26.249 }' 00:09:26.249 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.249 21:16:44 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.509 [2024-11-26 21:16:44.555288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.509 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.509 "name": "Existed_Raid", 00:09:26.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.509 "strip_size_kb": 0, 00:09:26.509 "state": "configuring", 00:09:26.509 "raid_level": "raid1", 00:09:26.509 "superblock": false, 00:09:26.509 "num_base_bdevs": 3, 00:09:26.509 "num_base_bdevs_discovered": 2, 00:09:26.509 "num_base_bdevs_operational": 3, 00:09:26.509 "base_bdevs_list": [ 00:09:26.509 { 00:09:26.509 "name": null, 00:09:26.509 "uuid": "0d3e5c57-e750-46df-8c63-5a9338ad6e6c", 00:09:26.509 "is_configured": false, 00:09:26.509 "data_offset": 0, 00:09:26.509 "data_size": 65536 00:09:26.509 }, 00:09:26.509 { 00:09:26.510 "name": "BaseBdev2", 00:09:26.510 "uuid": "d7e498ce-83d0-4e76-87a7-3754a8f1b964", 00:09:26.510 "is_configured": true, 00:09:26.510 "data_offset": 0, 00:09:26.510 "data_size": 65536 00:09:26.510 }, 00:09:26.510 { 
00:09:26.510 "name": "BaseBdev3", 00:09:26.510 "uuid": "dfd90c91-4bfb-4058-a996-5b434f12986d", 00:09:26.510 "is_configured": true, 00:09:26.510 "data_offset": 0, 00:09:26.510 "data_size": 65536 00:09:26.510 } 00:09:26.510 ] 00:09:26.510 }' 00:09:26.510 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.510 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.080 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.080 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.080 21:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.080 21:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0d3e5c57-e750-46df-8c63-5a9338ad6e6c 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.080 21:16:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.080 [2024-11-26 21:16:45.122887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:27.080 [2024-11-26 21:16:45.123083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:27.080 [2024-11-26 21:16:45.123111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:27.080 [2024-11-26 21:16:45.123392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:27.080 [2024-11-26 21:16:45.123584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:27.080 [2024-11-26 21:16:45.123624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:27.080 [2024-11-26 21:16:45.123943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.080 NewBaseBdev 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.080 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.080 [ 00:09:27.080 { 00:09:27.080 "name": "NewBaseBdev", 00:09:27.080 "aliases": [ 00:09:27.080 "0d3e5c57-e750-46df-8c63-5a9338ad6e6c" 00:09:27.080 ], 00:09:27.080 "product_name": "Malloc disk", 00:09:27.080 "block_size": 512, 00:09:27.080 "num_blocks": 65536, 00:09:27.080 "uuid": "0d3e5c57-e750-46df-8c63-5a9338ad6e6c", 00:09:27.080 "assigned_rate_limits": { 00:09:27.080 "rw_ios_per_sec": 0, 00:09:27.080 "rw_mbytes_per_sec": 0, 00:09:27.080 "r_mbytes_per_sec": 0, 00:09:27.080 "w_mbytes_per_sec": 0 00:09:27.080 }, 00:09:27.080 "claimed": true, 00:09:27.080 "claim_type": "exclusive_write", 00:09:27.080 "zoned": false, 00:09:27.080 "supported_io_types": { 00:09:27.080 "read": true, 00:09:27.080 "write": true, 00:09:27.080 "unmap": true, 00:09:27.080 "flush": true, 00:09:27.080 "reset": true, 00:09:27.080 "nvme_admin": false, 00:09:27.080 "nvme_io": false, 00:09:27.080 "nvme_io_md": false, 00:09:27.080 "write_zeroes": true, 00:09:27.081 "zcopy": true, 00:09:27.081 "get_zone_info": false, 00:09:27.081 "zone_management": false, 00:09:27.081 "zone_append": false, 00:09:27.081 "compare": false, 00:09:27.081 "compare_and_write": false, 00:09:27.081 "abort": true, 00:09:27.081 "seek_hole": false, 00:09:27.081 "seek_data": false, 00:09:27.081 "copy": true, 00:09:27.081 "nvme_iov_md": false 00:09:27.081 }, 00:09:27.081 "memory_domains": [ 00:09:27.081 { 00:09:27.081 
"dma_device_id": "system", 00:09:27.081 "dma_device_type": 1 00:09:27.081 }, 00:09:27.081 { 00:09:27.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.081 "dma_device_type": 2 00:09:27.081 } 00:09:27.081 ], 00:09:27.081 "driver_specific": {} 00:09:27.081 } 00:09:27.081 ] 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.081 "name": "Existed_Raid", 00:09:27.081 "uuid": "3a2a927b-4807-4b06-856d-05898d83aceb", 00:09:27.081 "strip_size_kb": 0, 00:09:27.081 "state": "online", 00:09:27.081 "raid_level": "raid1", 00:09:27.081 "superblock": false, 00:09:27.081 "num_base_bdevs": 3, 00:09:27.081 "num_base_bdevs_discovered": 3, 00:09:27.081 "num_base_bdevs_operational": 3, 00:09:27.081 "base_bdevs_list": [ 00:09:27.081 { 00:09:27.081 "name": "NewBaseBdev", 00:09:27.081 "uuid": "0d3e5c57-e750-46df-8c63-5a9338ad6e6c", 00:09:27.081 "is_configured": true, 00:09:27.081 "data_offset": 0, 00:09:27.081 "data_size": 65536 00:09:27.081 }, 00:09:27.081 { 00:09:27.081 "name": "BaseBdev2", 00:09:27.081 "uuid": "d7e498ce-83d0-4e76-87a7-3754a8f1b964", 00:09:27.081 "is_configured": true, 00:09:27.081 "data_offset": 0, 00:09:27.081 "data_size": 65536 00:09:27.081 }, 00:09:27.081 { 00:09:27.081 "name": "BaseBdev3", 00:09:27.081 "uuid": "dfd90c91-4bfb-4058-a996-5b434f12986d", 00:09:27.081 "is_configured": true, 00:09:27.081 "data_offset": 0, 00:09:27.081 "data_size": 65536 00:09:27.081 } 00:09:27.081 ] 00:09:27.081 }' 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.081 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.650 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:27.650 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:27.650 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.650 21:16:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.650 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.650 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.650 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:27.650 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.650 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.650 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.650 [2024-11-26 21:16:45.650334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.650 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.650 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.650 "name": "Existed_Raid", 00:09:27.650 "aliases": [ 00:09:27.650 "3a2a927b-4807-4b06-856d-05898d83aceb" 00:09:27.650 ], 00:09:27.650 "product_name": "Raid Volume", 00:09:27.650 "block_size": 512, 00:09:27.650 "num_blocks": 65536, 00:09:27.650 "uuid": "3a2a927b-4807-4b06-856d-05898d83aceb", 00:09:27.650 "assigned_rate_limits": { 00:09:27.650 "rw_ios_per_sec": 0, 00:09:27.650 "rw_mbytes_per_sec": 0, 00:09:27.650 "r_mbytes_per_sec": 0, 00:09:27.650 "w_mbytes_per_sec": 0 00:09:27.650 }, 00:09:27.650 "claimed": false, 00:09:27.650 "zoned": false, 00:09:27.650 "supported_io_types": { 00:09:27.650 "read": true, 00:09:27.650 "write": true, 00:09:27.650 "unmap": false, 00:09:27.650 "flush": false, 00:09:27.650 "reset": true, 00:09:27.650 "nvme_admin": false, 00:09:27.650 "nvme_io": false, 00:09:27.650 "nvme_io_md": false, 00:09:27.650 "write_zeroes": true, 00:09:27.650 "zcopy": false, 00:09:27.650 
"get_zone_info": false, 00:09:27.650 "zone_management": false, 00:09:27.650 "zone_append": false, 00:09:27.650 "compare": false, 00:09:27.650 "compare_and_write": false, 00:09:27.650 "abort": false, 00:09:27.650 "seek_hole": false, 00:09:27.650 "seek_data": false, 00:09:27.650 "copy": false, 00:09:27.650 "nvme_iov_md": false 00:09:27.650 }, 00:09:27.650 "memory_domains": [ 00:09:27.650 { 00:09:27.650 "dma_device_id": "system", 00:09:27.650 "dma_device_type": 1 00:09:27.650 }, 00:09:27.650 { 00:09:27.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.650 "dma_device_type": 2 00:09:27.650 }, 00:09:27.650 { 00:09:27.650 "dma_device_id": "system", 00:09:27.650 "dma_device_type": 1 00:09:27.650 }, 00:09:27.650 { 00:09:27.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.650 "dma_device_type": 2 00:09:27.650 }, 00:09:27.650 { 00:09:27.650 "dma_device_id": "system", 00:09:27.650 "dma_device_type": 1 00:09:27.650 }, 00:09:27.650 { 00:09:27.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.650 "dma_device_type": 2 00:09:27.650 } 00:09:27.650 ], 00:09:27.650 "driver_specific": { 00:09:27.650 "raid": { 00:09:27.650 "uuid": "3a2a927b-4807-4b06-856d-05898d83aceb", 00:09:27.650 "strip_size_kb": 0, 00:09:27.650 "state": "online", 00:09:27.650 "raid_level": "raid1", 00:09:27.650 "superblock": false, 00:09:27.650 "num_base_bdevs": 3, 00:09:27.650 "num_base_bdevs_discovered": 3, 00:09:27.650 "num_base_bdevs_operational": 3, 00:09:27.650 "base_bdevs_list": [ 00:09:27.650 { 00:09:27.650 "name": "NewBaseBdev", 00:09:27.650 "uuid": "0d3e5c57-e750-46df-8c63-5a9338ad6e6c", 00:09:27.650 "is_configured": true, 00:09:27.650 "data_offset": 0, 00:09:27.650 "data_size": 65536 00:09:27.650 }, 00:09:27.650 { 00:09:27.650 "name": "BaseBdev2", 00:09:27.650 "uuid": "d7e498ce-83d0-4e76-87a7-3754a8f1b964", 00:09:27.650 "is_configured": true, 00:09:27.650 "data_offset": 0, 00:09:27.650 "data_size": 65536 00:09:27.650 }, 00:09:27.650 { 00:09:27.650 "name": "BaseBdev3", 00:09:27.650 "uuid": 
"dfd90c91-4bfb-4058-a996-5b434f12986d", 00:09:27.650 "is_configured": true, 00:09:27.650 "data_offset": 0, 00:09:27.651 "data_size": 65536 00:09:27.651 } 00:09:27.651 ] 00:09:27.651 } 00:09:27.651 } 00:09:27.651 }' 00:09:27.651 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.651 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:27.651 BaseBdev2 00:09:27.651 BaseBdev3' 00:09:27.651 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.651 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.651 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.651 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:27.651 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.651 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.651 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.651 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.911 
[2024-11-26 21:16:45.941500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.911 [2024-11-26 21:16:45.941579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.911 [2024-11-26 21:16:45.941691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.911 [2024-11-26 21:16:45.942014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.911 [2024-11-26 21:16:45.942071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67231 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67231 ']' 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67231 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67231 00:09:27.911 killing process with pid 67231 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.911 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67231' 00:09:27.912 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67231 00:09:27.912 [2024-11-26 
21:16:45.989997] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.912 21:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67231 00:09:28.171 [2024-11-26 21:16:46.285009] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.554 21:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:29.554 00:09:29.554 real 0m10.466s 00:09:29.554 user 0m16.694s 00:09:29.554 sys 0m1.768s 00:09:29.554 21:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.554 21:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.554 ************************************ 00:09:29.554 END TEST raid_state_function_test 00:09:29.554 ************************************ 00:09:29.554 21:16:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:29.554 21:16:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:29.554 21:16:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.554 21:16:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.554 ************************************ 00:09:29.554 START TEST raid_state_function_test_sb 00:09:29.554 ************************************ 00:09:29.554 21:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:29.555 21:16:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:29.555 
21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67852 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67852' 00:09:29.555 Process raid pid: 67852 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67852 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67852 ']' 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.555 21:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.555 [2024-11-26 21:16:47.561148] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:29.555 [2024-11-26 21:16:47.561342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.815 [2024-11-26 21:16:47.734987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.815 [2024-11-26 21:16:47.848780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.074 [2024-11-26 21:16:48.053942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.074 [2024-11-26 21:16:48.054072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.334 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.334 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:30.334 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.334 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.334 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 [2024-11-26 21:16:48.405725] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.335 [2024-11-26 21:16:48.405781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.335 [2024-11-26 21:16:48.405796] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.335 [2024-11-26 21:16:48.405806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.335 [2024-11-26 21:16:48.405812] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:30.335 [2024-11-26 21:16:48.405820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.335 "name": "Existed_Raid", 00:09:30.335 "uuid": "36b2b2ea-c177-497a-badb-89578b3505a0", 00:09:30.335 "strip_size_kb": 0, 00:09:30.335 "state": "configuring", 00:09:30.335 "raid_level": "raid1", 00:09:30.335 "superblock": true, 00:09:30.335 "num_base_bdevs": 3, 00:09:30.335 "num_base_bdevs_discovered": 0, 00:09:30.335 "num_base_bdevs_operational": 3, 00:09:30.335 "base_bdevs_list": [ 00:09:30.335 { 00:09:30.335 "name": "BaseBdev1", 00:09:30.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.335 "is_configured": false, 00:09:30.335 "data_offset": 0, 00:09:30.335 "data_size": 0 00:09:30.335 }, 00:09:30.335 { 00:09:30.335 "name": "BaseBdev2", 00:09:30.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.335 "is_configured": false, 00:09:30.335 "data_offset": 0, 00:09:30.335 "data_size": 0 00:09:30.335 }, 00:09:30.335 { 00:09:30.335 "name": "BaseBdev3", 00:09:30.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.335 "is_configured": false, 00:09:30.335 "data_offset": 0, 00:09:30.335 "data_size": 0 00:09:30.335 } 00:09:30.335 ] 00:09:30.335 }' 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.335 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.906 [2024-11-26 21:16:48.864896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.906 [2024-11-26 21:16:48.865038] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.906 [2024-11-26 21:16:48.876859] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.906 [2024-11-26 21:16:48.876906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.906 [2024-11-26 21:16:48.876916] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.906 [2024-11-26 21:16:48.876925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.906 [2024-11-26 21:16:48.876932] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:30.906 [2024-11-26 21:16:48.876942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.906 [2024-11-26 21:16:48.924120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.906 BaseBdev1 
00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.906 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.906 [ 00:09:30.906 { 00:09:30.906 "name": "BaseBdev1", 00:09:30.906 "aliases": [ 00:09:30.906 "2b516465-2d15-412a-9de4-7fbbe6e0c3fa" 00:09:30.906 ], 00:09:30.906 "product_name": "Malloc disk", 00:09:30.906 "block_size": 512, 00:09:30.906 "num_blocks": 65536, 00:09:30.906 "uuid": "2b516465-2d15-412a-9de4-7fbbe6e0c3fa", 00:09:30.906 "assigned_rate_limits": { 00:09:30.906 
"rw_ios_per_sec": 0, 00:09:30.906 "rw_mbytes_per_sec": 0, 00:09:30.906 "r_mbytes_per_sec": 0, 00:09:30.906 "w_mbytes_per_sec": 0 00:09:30.906 }, 00:09:30.906 "claimed": true, 00:09:30.906 "claim_type": "exclusive_write", 00:09:30.906 "zoned": false, 00:09:30.906 "supported_io_types": { 00:09:30.906 "read": true, 00:09:30.906 "write": true, 00:09:30.906 "unmap": true, 00:09:30.906 "flush": true, 00:09:30.906 "reset": true, 00:09:30.906 "nvme_admin": false, 00:09:30.906 "nvme_io": false, 00:09:30.906 "nvme_io_md": false, 00:09:30.906 "write_zeroes": true, 00:09:30.906 "zcopy": true, 00:09:30.906 "get_zone_info": false, 00:09:30.907 "zone_management": false, 00:09:30.907 "zone_append": false, 00:09:30.907 "compare": false, 00:09:30.907 "compare_and_write": false, 00:09:30.907 "abort": true, 00:09:30.907 "seek_hole": false, 00:09:30.907 "seek_data": false, 00:09:30.907 "copy": true, 00:09:30.907 "nvme_iov_md": false 00:09:30.907 }, 00:09:30.907 "memory_domains": [ 00:09:30.907 { 00:09:30.907 "dma_device_id": "system", 00:09:30.907 "dma_device_type": 1 00:09:30.907 }, 00:09:30.907 { 00:09:30.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.907 "dma_device_type": 2 00:09:30.907 } 00:09:30.907 ], 00:09:30.907 "driver_specific": {} 00:09:30.907 } 00:09:30.907 ] 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.907 21:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.907 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.907 "name": "Existed_Raid", 00:09:30.907 "uuid": "3d8d7e98-b8f5-47ed-a380-95d32ffe67e2", 00:09:30.907 "strip_size_kb": 0, 00:09:30.907 "state": "configuring", 00:09:30.907 "raid_level": "raid1", 00:09:30.907 "superblock": true, 00:09:30.907 "num_base_bdevs": 3, 00:09:30.907 "num_base_bdevs_discovered": 1, 00:09:30.907 "num_base_bdevs_operational": 3, 00:09:30.907 "base_bdevs_list": [ 00:09:30.907 { 00:09:30.907 "name": "BaseBdev1", 00:09:30.907 "uuid": "2b516465-2d15-412a-9de4-7fbbe6e0c3fa", 00:09:30.907 "is_configured": true, 00:09:30.907 "data_offset": 2048, 00:09:30.907 "data_size": 63488 
00:09:30.907 }, 00:09:30.907 { 00:09:30.907 "name": "BaseBdev2", 00:09:30.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.907 "is_configured": false, 00:09:30.907 "data_offset": 0, 00:09:30.907 "data_size": 0 00:09:30.907 }, 00:09:30.907 { 00:09:30.907 "name": "BaseBdev3", 00:09:30.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.907 "is_configured": false, 00:09:30.907 "data_offset": 0, 00:09:30.907 "data_size": 0 00:09:30.907 } 00:09:30.907 ] 00:09:30.907 }' 00:09:30.907 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.907 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.556 [2024-11-26 21:16:49.443286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.556 [2024-11-26 21:16:49.443344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.556 [2024-11-26 21:16:49.455297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.556 [2024-11-26 21:16:49.457205] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.556 [2024-11-26 21:16:49.457282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.556 [2024-11-26 21:16:49.457312] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.556 [2024-11-26 21:16:49.457333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.556 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.556 "name": "Existed_Raid", 00:09:31.556 "uuid": "3b0178d7-372e-495c-89b0-e970db1e8b2c", 00:09:31.556 "strip_size_kb": 0, 00:09:31.556 "state": "configuring", 00:09:31.556 "raid_level": "raid1", 00:09:31.556 "superblock": true, 00:09:31.556 "num_base_bdevs": 3, 00:09:31.556 "num_base_bdevs_discovered": 1, 00:09:31.556 "num_base_bdevs_operational": 3, 00:09:31.556 "base_bdevs_list": [ 00:09:31.556 { 00:09:31.556 "name": "BaseBdev1", 00:09:31.556 "uuid": "2b516465-2d15-412a-9de4-7fbbe6e0c3fa", 00:09:31.556 "is_configured": true, 00:09:31.556 "data_offset": 2048, 00:09:31.556 "data_size": 63488 00:09:31.556 }, 00:09:31.556 { 00:09:31.556 "name": "BaseBdev2", 00:09:31.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.557 "is_configured": false, 00:09:31.557 "data_offset": 0, 00:09:31.557 "data_size": 0 00:09:31.557 }, 00:09:31.557 { 00:09:31.557 "name": "BaseBdev3", 00:09:31.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.557 "is_configured": false, 00:09:31.557 "data_offset": 0, 00:09:31.557 "data_size": 0 00:09:31.557 } 00:09:31.557 ] 00:09:31.557 }' 00:09:31.557 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.557 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.816 [2024-11-26 21:16:49.962662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.816 BaseBdev2 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.816 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.076 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.076 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.076 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:32.076 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.076 [ 00:09:32.076 { 00:09:32.076 "name": "BaseBdev2", 00:09:32.076 "aliases": [ 00:09:32.076 "22e4fb13-301a-4e22-8aa9-30aa39c0a533" 00:09:32.076 ], 00:09:32.076 "product_name": "Malloc disk", 00:09:32.076 "block_size": 512, 00:09:32.076 "num_blocks": 65536, 00:09:32.076 "uuid": "22e4fb13-301a-4e22-8aa9-30aa39c0a533", 00:09:32.076 "assigned_rate_limits": { 00:09:32.076 "rw_ios_per_sec": 0, 00:09:32.076 "rw_mbytes_per_sec": 0, 00:09:32.076 "r_mbytes_per_sec": 0, 00:09:32.076 "w_mbytes_per_sec": 0 00:09:32.076 }, 00:09:32.076 "claimed": true, 00:09:32.076 "claim_type": "exclusive_write", 00:09:32.076 "zoned": false, 00:09:32.076 "supported_io_types": { 00:09:32.076 "read": true, 00:09:32.076 "write": true, 00:09:32.076 "unmap": true, 00:09:32.076 "flush": true, 00:09:32.076 "reset": true, 00:09:32.076 "nvme_admin": false, 00:09:32.076 "nvme_io": false, 00:09:32.076 "nvme_io_md": false, 00:09:32.076 "write_zeroes": true, 00:09:32.076 "zcopy": true, 00:09:32.076 "get_zone_info": false, 00:09:32.076 "zone_management": false, 00:09:32.076 "zone_append": false, 00:09:32.076 "compare": false, 00:09:32.076 "compare_and_write": false, 00:09:32.076 "abort": true, 00:09:32.076 "seek_hole": false, 00:09:32.076 "seek_data": false, 00:09:32.076 "copy": true, 00:09:32.076 "nvme_iov_md": false 00:09:32.076 }, 00:09:32.076 "memory_domains": [ 00:09:32.076 { 00:09:32.076 "dma_device_id": "system", 00:09:32.076 "dma_device_type": 1 00:09:32.076 }, 00:09:32.076 { 00:09:32.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.076 "dma_device_type": 2 00:09:32.076 } 00:09:32.076 ], 00:09:32.076 "driver_specific": {} 00:09:32.076 } 00:09:32.076 ] 00:09:32.076 21:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.076 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.076 
21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.076 "name": "Existed_Raid", 00:09:32.076 "uuid": "3b0178d7-372e-495c-89b0-e970db1e8b2c", 00:09:32.076 "strip_size_kb": 0, 00:09:32.076 "state": "configuring", 00:09:32.076 "raid_level": "raid1", 00:09:32.076 "superblock": true, 00:09:32.076 "num_base_bdevs": 3, 00:09:32.076 "num_base_bdevs_discovered": 2, 00:09:32.076 "num_base_bdevs_operational": 3, 00:09:32.076 "base_bdevs_list": [ 00:09:32.076 { 00:09:32.076 "name": "BaseBdev1", 00:09:32.076 "uuid": "2b516465-2d15-412a-9de4-7fbbe6e0c3fa", 00:09:32.077 "is_configured": true, 00:09:32.077 "data_offset": 2048, 00:09:32.077 "data_size": 63488 00:09:32.077 }, 00:09:32.077 { 00:09:32.077 "name": "BaseBdev2", 00:09:32.077 "uuid": "22e4fb13-301a-4e22-8aa9-30aa39c0a533", 00:09:32.077 "is_configured": true, 00:09:32.077 "data_offset": 2048, 00:09:32.077 "data_size": 63488 00:09:32.077 }, 00:09:32.077 { 00:09:32.077 "name": "BaseBdev3", 00:09:32.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.077 "is_configured": false, 00:09:32.077 "data_offset": 0, 00:09:32.077 "data_size": 0 00:09:32.077 } 00:09:32.077 ] 00:09:32.077 }' 00:09:32.077 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.077 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.336 [2024-11-26 21:16:50.468211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.336 [2024-11-26 21:16:50.468587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:32.336 [2024-11-26 21:16:50.468658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:32.336 [2024-11-26 21:16:50.468951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:32.336 [2024-11-26 21:16:50.469167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:32.336 [2024-11-26 21:16:50.469209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:32.336 BaseBdev3 00:09:32.336 [2024-11-26 21:16:50.469396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.336 21:16:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.336 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.597 [ 00:09:32.597 { 00:09:32.597 "name": "BaseBdev3", 00:09:32.597 "aliases": [ 00:09:32.597 "9e9835c0-bb0c-4c7a-a662-257178e1736f" 00:09:32.597 ], 00:09:32.597 "product_name": "Malloc disk", 00:09:32.597 "block_size": 512, 00:09:32.597 "num_blocks": 65536, 00:09:32.597 "uuid": "9e9835c0-bb0c-4c7a-a662-257178e1736f", 00:09:32.597 "assigned_rate_limits": { 00:09:32.597 "rw_ios_per_sec": 0, 00:09:32.597 "rw_mbytes_per_sec": 0, 00:09:32.597 "r_mbytes_per_sec": 0, 00:09:32.597 "w_mbytes_per_sec": 0 00:09:32.597 }, 00:09:32.597 "claimed": true, 00:09:32.597 "claim_type": "exclusive_write", 00:09:32.597 "zoned": false, 00:09:32.597 "supported_io_types": { 00:09:32.597 "read": true, 00:09:32.597 "write": true, 00:09:32.597 "unmap": true, 00:09:32.597 "flush": true, 00:09:32.597 "reset": true, 00:09:32.597 "nvme_admin": false, 00:09:32.597 "nvme_io": false, 00:09:32.597 "nvme_io_md": false, 00:09:32.597 "write_zeroes": true, 00:09:32.597 "zcopy": true, 00:09:32.597 "get_zone_info": false, 00:09:32.597 "zone_management": false, 00:09:32.597 "zone_append": false, 00:09:32.597 "compare": false, 00:09:32.597 "compare_and_write": false, 00:09:32.597 "abort": true, 00:09:32.597 "seek_hole": false, 00:09:32.597 "seek_data": false, 00:09:32.597 "copy": true, 00:09:32.597 "nvme_iov_md": false 00:09:32.597 }, 00:09:32.597 "memory_domains": [ 00:09:32.597 { 00:09:32.597 "dma_device_id": "system", 00:09:32.597 "dma_device_type": 1 00:09:32.597 }, 00:09:32.597 { 00:09:32.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.597 "dma_device_type": 2 00:09:32.597 } 00:09:32.597 ], 00:09:32.597 "driver_specific": {} 00:09:32.597 } 00:09:32.597 ] 
00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.597 
21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.597 "name": "Existed_Raid", 00:09:32.597 "uuid": "3b0178d7-372e-495c-89b0-e970db1e8b2c", 00:09:32.597 "strip_size_kb": 0, 00:09:32.597 "state": "online", 00:09:32.597 "raid_level": "raid1", 00:09:32.597 "superblock": true, 00:09:32.597 "num_base_bdevs": 3, 00:09:32.597 "num_base_bdevs_discovered": 3, 00:09:32.597 "num_base_bdevs_operational": 3, 00:09:32.597 "base_bdevs_list": [ 00:09:32.597 { 00:09:32.597 "name": "BaseBdev1", 00:09:32.597 "uuid": "2b516465-2d15-412a-9de4-7fbbe6e0c3fa", 00:09:32.597 "is_configured": true, 00:09:32.597 "data_offset": 2048, 00:09:32.597 "data_size": 63488 00:09:32.597 }, 00:09:32.597 { 00:09:32.597 "name": "BaseBdev2", 00:09:32.597 "uuid": "22e4fb13-301a-4e22-8aa9-30aa39c0a533", 00:09:32.597 "is_configured": true, 00:09:32.597 "data_offset": 2048, 00:09:32.597 "data_size": 63488 00:09:32.597 }, 00:09:32.597 { 00:09:32.597 "name": "BaseBdev3", 00:09:32.597 "uuid": "9e9835c0-bb0c-4c7a-a662-257178e1736f", 00:09:32.597 "is_configured": true, 00:09:32.597 "data_offset": 2048, 00:09:32.597 "data_size": 63488 00:09:32.597 } 00:09:32.597 ] 00:09:32.597 }' 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.597 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.857 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:32.857 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:32.857 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:32.857 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.857 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.857 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.857 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:32.857 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.857 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.857 21:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.857 [2024-11-26 21:16:50.967828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.857 21:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.857 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.857 "name": "Existed_Raid", 00:09:32.857 "aliases": [ 00:09:32.857 "3b0178d7-372e-495c-89b0-e970db1e8b2c" 00:09:32.857 ], 00:09:32.857 "product_name": "Raid Volume", 00:09:32.857 "block_size": 512, 00:09:32.857 "num_blocks": 63488, 00:09:32.857 "uuid": "3b0178d7-372e-495c-89b0-e970db1e8b2c", 00:09:32.857 "assigned_rate_limits": { 00:09:32.857 "rw_ios_per_sec": 0, 00:09:32.857 "rw_mbytes_per_sec": 0, 00:09:32.857 "r_mbytes_per_sec": 0, 00:09:32.857 "w_mbytes_per_sec": 0 00:09:32.857 }, 00:09:32.857 "claimed": false, 00:09:32.857 "zoned": false, 00:09:32.857 "supported_io_types": { 00:09:32.857 "read": true, 00:09:32.857 "write": true, 00:09:32.857 "unmap": false, 00:09:32.857 "flush": false, 00:09:32.857 "reset": true, 00:09:32.857 "nvme_admin": false, 00:09:32.857 "nvme_io": false, 00:09:32.857 "nvme_io_md": false, 00:09:32.857 "write_zeroes": true, 
00:09:32.857 "zcopy": false, 00:09:32.857 "get_zone_info": false, 00:09:32.858 "zone_management": false, 00:09:32.858 "zone_append": false, 00:09:32.858 "compare": false, 00:09:32.858 "compare_and_write": false, 00:09:32.858 "abort": false, 00:09:32.858 "seek_hole": false, 00:09:32.858 "seek_data": false, 00:09:32.858 "copy": false, 00:09:32.858 "nvme_iov_md": false 00:09:32.858 }, 00:09:32.858 "memory_domains": [ 00:09:32.858 { 00:09:32.858 "dma_device_id": "system", 00:09:32.858 "dma_device_type": 1 00:09:32.858 }, 00:09:32.858 { 00:09:32.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.858 "dma_device_type": 2 00:09:32.858 }, 00:09:32.858 { 00:09:32.858 "dma_device_id": "system", 00:09:32.858 "dma_device_type": 1 00:09:32.858 }, 00:09:32.858 { 00:09:32.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.858 "dma_device_type": 2 00:09:32.858 }, 00:09:32.858 { 00:09:32.858 "dma_device_id": "system", 00:09:32.858 "dma_device_type": 1 00:09:32.858 }, 00:09:32.858 { 00:09:32.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.858 "dma_device_type": 2 00:09:32.858 } 00:09:32.858 ], 00:09:32.858 "driver_specific": { 00:09:32.858 "raid": { 00:09:32.858 "uuid": "3b0178d7-372e-495c-89b0-e970db1e8b2c", 00:09:32.858 "strip_size_kb": 0, 00:09:32.858 "state": "online", 00:09:32.858 "raid_level": "raid1", 00:09:32.858 "superblock": true, 00:09:32.858 "num_base_bdevs": 3, 00:09:32.858 "num_base_bdevs_discovered": 3, 00:09:32.858 "num_base_bdevs_operational": 3, 00:09:32.858 "base_bdevs_list": [ 00:09:32.858 { 00:09:32.858 "name": "BaseBdev1", 00:09:32.858 "uuid": "2b516465-2d15-412a-9de4-7fbbe6e0c3fa", 00:09:32.858 "is_configured": true, 00:09:32.858 "data_offset": 2048, 00:09:32.858 "data_size": 63488 00:09:32.858 }, 00:09:32.858 { 00:09:32.858 "name": "BaseBdev2", 00:09:32.858 "uuid": "22e4fb13-301a-4e22-8aa9-30aa39c0a533", 00:09:32.858 "is_configured": true, 00:09:32.858 "data_offset": 2048, 00:09:32.858 "data_size": 63488 00:09:32.858 }, 00:09:32.858 { 
00:09:32.858 "name": "BaseBdev3", 00:09:32.858 "uuid": "9e9835c0-bb0c-4c7a-a662-257178e1736f", 00:09:32.858 "is_configured": true, 00:09:32.858 "data_offset": 2048, 00:09:32.858 "data_size": 63488 00:09:32.858 } 00:09:32.858 ] 00:09:32.858 } 00:09:32.858 } 00:09:32.858 }' 00:09:32.858 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:33.119 BaseBdev2 00:09:33.119 BaseBdev3' 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.119 21:16:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.119 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.120 21:16:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.120 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.120 [2024-11-26 21:16:51.235093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.380 
21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.380 "name": "Existed_Raid", 00:09:33.380 "uuid": "3b0178d7-372e-495c-89b0-e970db1e8b2c", 00:09:33.380 "strip_size_kb": 0, 00:09:33.380 "state": "online", 00:09:33.380 "raid_level": "raid1", 00:09:33.380 "superblock": true, 00:09:33.380 "num_base_bdevs": 3, 00:09:33.380 "num_base_bdevs_discovered": 2, 00:09:33.380 "num_base_bdevs_operational": 2, 00:09:33.380 "base_bdevs_list": [ 00:09:33.380 { 00:09:33.380 "name": null, 00:09:33.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.380 "is_configured": false, 00:09:33.380 "data_offset": 0, 00:09:33.380 "data_size": 63488 00:09:33.380 }, 00:09:33.380 { 00:09:33.380 "name": "BaseBdev2", 00:09:33.380 "uuid": "22e4fb13-301a-4e22-8aa9-30aa39c0a533", 00:09:33.380 "is_configured": true, 00:09:33.380 "data_offset": 2048, 00:09:33.380 "data_size": 63488 00:09:33.380 }, 00:09:33.380 { 00:09:33.380 "name": "BaseBdev3", 00:09:33.380 "uuid": "9e9835c0-bb0c-4c7a-a662-257178e1736f", 00:09:33.380 "is_configured": true, 00:09:33.380 "data_offset": 2048, 00:09:33.380 "data_size": 63488 00:09:33.380 } 00:09:33.380 ] 00:09:33.380 }' 00:09:33.380 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.380 
21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.640 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.640 [2024-11-26 21:16:51.760266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.900 21:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.900 [2024-11-26 21:16:51.911390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:33.900 [2024-11-26 21:16:51.911545] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.900 [2024-11-26 21:16:52.004672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.900 [2024-11-26 21:16:52.004798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.900 [2024-11-26 21:16:52.004840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:33.900 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.900 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:33.900 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:09:33.900 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.900 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.900 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.900 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:33.900 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.160 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:34.160 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.161 BaseBdev2 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.161 21:16:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.161 [ 00:09:34.161 { 00:09:34.161 "name": "BaseBdev2", 00:09:34.161 "aliases": [ 00:09:34.161 "8f4218d5-524e-46b3-8cbe-52ab5402dad2" 00:09:34.161 ], 00:09:34.161 "product_name": "Malloc disk", 00:09:34.161 "block_size": 512, 00:09:34.161 "num_blocks": 65536, 00:09:34.161 "uuid": "8f4218d5-524e-46b3-8cbe-52ab5402dad2", 00:09:34.161 "assigned_rate_limits": { 00:09:34.161 "rw_ios_per_sec": 0, 00:09:34.161 "rw_mbytes_per_sec": 0, 00:09:34.161 "r_mbytes_per_sec": 0, 00:09:34.161 "w_mbytes_per_sec": 0 00:09:34.161 }, 00:09:34.161 "claimed": false, 00:09:34.161 "zoned": false, 00:09:34.161 "supported_io_types": { 00:09:34.161 "read": true, 00:09:34.161 "write": true, 00:09:34.161 "unmap": true, 00:09:34.161 "flush": true, 00:09:34.161 "reset": true, 00:09:34.161 "nvme_admin": false, 00:09:34.161 "nvme_io": false, 00:09:34.161 "nvme_io_md": false, 00:09:34.161 
"write_zeroes": true, 00:09:34.161 "zcopy": true, 00:09:34.161 "get_zone_info": false, 00:09:34.161 "zone_management": false, 00:09:34.161 "zone_append": false, 00:09:34.161 "compare": false, 00:09:34.161 "compare_and_write": false, 00:09:34.161 "abort": true, 00:09:34.161 "seek_hole": false, 00:09:34.161 "seek_data": false, 00:09:34.161 "copy": true, 00:09:34.161 "nvme_iov_md": false 00:09:34.161 }, 00:09:34.161 "memory_domains": [ 00:09:34.161 { 00:09:34.161 "dma_device_id": "system", 00:09:34.161 "dma_device_type": 1 00:09:34.161 }, 00:09:34.161 { 00:09:34.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.161 "dma_device_type": 2 00:09:34.161 } 00:09:34.161 ], 00:09:34.161 "driver_specific": {} 00:09:34.161 } 00:09:34.161 ] 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.161 BaseBdev3 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.161 [ 00:09:34.161 { 00:09:34.161 "name": "BaseBdev3", 00:09:34.161 "aliases": [ 00:09:34.161 "ad898ebe-e90d-4c79-a676-6f26c61be0ed" 00:09:34.161 ], 00:09:34.161 "product_name": "Malloc disk", 00:09:34.161 "block_size": 512, 00:09:34.161 "num_blocks": 65536, 00:09:34.161 "uuid": "ad898ebe-e90d-4c79-a676-6f26c61be0ed", 00:09:34.161 "assigned_rate_limits": { 00:09:34.161 "rw_ios_per_sec": 0, 00:09:34.161 "rw_mbytes_per_sec": 0, 00:09:34.161 "r_mbytes_per_sec": 0, 00:09:34.161 "w_mbytes_per_sec": 0 00:09:34.161 }, 00:09:34.161 "claimed": false, 00:09:34.161 "zoned": false, 00:09:34.161 "supported_io_types": { 00:09:34.161 "read": true, 00:09:34.161 "write": true, 00:09:34.161 "unmap": true, 00:09:34.161 "flush": true, 00:09:34.161 "reset": true, 00:09:34.161 "nvme_admin": false, 00:09:34.161 "nvme_io": false, 
00:09:34.161 "nvme_io_md": false, 00:09:34.161 "write_zeroes": true, 00:09:34.161 "zcopy": true, 00:09:34.161 "get_zone_info": false, 00:09:34.161 "zone_management": false, 00:09:34.161 "zone_append": false, 00:09:34.161 "compare": false, 00:09:34.161 "compare_and_write": false, 00:09:34.161 "abort": true, 00:09:34.161 "seek_hole": false, 00:09:34.161 "seek_data": false, 00:09:34.161 "copy": true, 00:09:34.161 "nvme_iov_md": false 00:09:34.161 }, 00:09:34.161 "memory_domains": [ 00:09:34.161 { 00:09:34.161 "dma_device_id": "system", 00:09:34.161 "dma_device_type": 1 00:09:34.161 }, 00:09:34.161 { 00:09:34.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.161 "dma_device_type": 2 00:09:34.161 } 00:09:34.161 ], 00:09:34.161 "driver_specific": {} 00:09:34.161 } 00:09:34.161 ] 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.161 [2024-11-26 21:16:52.215949] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.161 [2024-11-26 21:16:52.216086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.161 [2024-11-26 21:16:52.216125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:09:34.161 [2024-11-26 21:16:52.217837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.161 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.162 "name": "Existed_Raid", 00:09:34.162 "uuid": "a05e7497-194c-4e80-8cf4-a20d6d555e0c", 00:09:34.162 "strip_size_kb": 0, 00:09:34.162 "state": "configuring", 00:09:34.162 "raid_level": "raid1", 00:09:34.162 "superblock": true, 00:09:34.162 "num_base_bdevs": 3, 00:09:34.162 "num_base_bdevs_discovered": 2, 00:09:34.162 "num_base_bdevs_operational": 3, 00:09:34.162 "base_bdevs_list": [ 00:09:34.162 { 00:09:34.162 "name": "BaseBdev1", 00:09:34.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.162 "is_configured": false, 00:09:34.162 "data_offset": 0, 00:09:34.162 "data_size": 0 00:09:34.162 }, 00:09:34.162 { 00:09:34.162 "name": "BaseBdev2", 00:09:34.162 "uuid": "8f4218d5-524e-46b3-8cbe-52ab5402dad2", 00:09:34.162 "is_configured": true, 00:09:34.162 "data_offset": 2048, 00:09:34.162 "data_size": 63488 00:09:34.162 }, 00:09:34.162 { 00:09:34.162 "name": "BaseBdev3", 00:09:34.162 "uuid": "ad898ebe-e90d-4c79-a676-6f26c61be0ed", 00:09:34.162 "is_configured": true, 00:09:34.162 "data_offset": 2048, 00:09:34.162 "data_size": 63488 00:09:34.162 } 00:09:34.162 ] 00:09:34.162 }' 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.162 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.732 [2024-11-26 21:16:52.655238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.732 "name": "Existed_Raid", 00:09:34.732 "uuid": 
"a05e7497-194c-4e80-8cf4-a20d6d555e0c", 00:09:34.732 "strip_size_kb": 0, 00:09:34.732 "state": "configuring", 00:09:34.732 "raid_level": "raid1", 00:09:34.732 "superblock": true, 00:09:34.732 "num_base_bdevs": 3, 00:09:34.732 "num_base_bdevs_discovered": 1, 00:09:34.732 "num_base_bdevs_operational": 3, 00:09:34.732 "base_bdevs_list": [ 00:09:34.732 { 00:09:34.732 "name": "BaseBdev1", 00:09:34.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.732 "is_configured": false, 00:09:34.732 "data_offset": 0, 00:09:34.732 "data_size": 0 00:09:34.732 }, 00:09:34.732 { 00:09:34.732 "name": null, 00:09:34.732 "uuid": "8f4218d5-524e-46b3-8cbe-52ab5402dad2", 00:09:34.732 "is_configured": false, 00:09:34.732 "data_offset": 0, 00:09:34.732 "data_size": 63488 00:09:34.732 }, 00:09:34.732 { 00:09:34.732 "name": "BaseBdev3", 00:09:34.732 "uuid": "ad898ebe-e90d-4c79-a676-6f26c61be0ed", 00:09:34.732 "is_configured": true, 00:09:34.732 "data_offset": 2048, 00:09:34.732 "data_size": 63488 00:09:34.732 } 00:09:34.732 ] 00:09:34.732 }' 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.732 21:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.992 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.992 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.992 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.992 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:34.992 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.992 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:34.992 21:16:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.992 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.992 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.253 [2024-11-26 21:16:53.171154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.253 BaseBdev1 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:35.253 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.253 [ 00:09:35.253 { 00:09:35.253 "name": "BaseBdev1", 00:09:35.253 "aliases": [ 00:09:35.253 "51b10a02-cc35-495b-9aa5-48055a810768" 00:09:35.253 ], 00:09:35.253 "product_name": "Malloc disk", 00:09:35.253 "block_size": 512, 00:09:35.253 "num_blocks": 65536, 00:09:35.253 "uuid": "51b10a02-cc35-495b-9aa5-48055a810768", 00:09:35.253 "assigned_rate_limits": { 00:09:35.253 "rw_ios_per_sec": 0, 00:09:35.253 "rw_mbytes_per_sec": 0, 00:09:35.253 "r_mbytes_per_sec": 0, 00:09:35.253 "w_mbytes_per_sec": 0 00:09:35.253 }, 00:09:35.253 "claimed": true, 00:09:35.253 "claim_type": "exclusive_write", 00:09:35.253 "zoned": false, 00:09:35.253 "supported_io_types": { 00:09:35.253 "read": true, 00:09:35.253 "write": true, 00:09:35.253 "unmap": true, 00:09:35.253 "flush": true, 00:09:35.253 "reset": true, 00:09:35.253 "nvme_admin": false, 00:09:35.253 "nvme_io": false, 00:09:35.253 "nvme_io_md": false, 00:09:35.253 "write_zeroes": true, 00:09:35.253 "zcopy": true, 00:09:35.253 "get_zone_info": false, 00:09:35.253 "zone_management": false, 00:09:35.253 "zone_append": false, 00:09:35.253 "compare": false, 00:09:35.253 "compare_and_write": false, 00:09:35.253 "abort": true, 00:09:35.253 "seek_hole": false, 00:09:35.254 "seek_data": false, 00:09:35.254 "copy": true, 00:09:35.254 "nvme_iov_md": false 00:09:35.254 }, 00:09:35.254 "memory_domains": [ 00:09:35.254 { 00:09:35.254 "dma_device_id": "system", 00:09:35.254 "dma_device_type": 1 00:09:35.254 }, 00:09:35.254 { 00:09:35.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.254 "dma_device_type": 2 00:09:35.254 } 00:09:35.254 ], 00:09:35.254 "driver_specific": {} 00:09:35.254 } 00:09:35.254 ] 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:35.254 
21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.254 "name": "Existed_Raid", 00:09:35.254 "uuid": "a05e7497-194c-4e80-8cf4-a20d6d555e0c", 00:09:35.254 "strip_size_kb": 0, 
00:09:35.254 "state": "configuring", 00:09:35.254 "raid_level": "raid1", 00:09:35.254 "superblock": true, 00:09:35.254 "num_base_bdevs": 3, 00:09:35.254 "num_base_bdevs_discovered": 2, 00:09:35.254 "num_base_bdevs_operational": 3, 00:09:35.254 "base_bdevs_list": [ 00:09:35.254 { 00:09:35.254 "name": "BaseBdev1", 00:09:35.254 "uuid": "51b10a02-cc35-495b-9aa5-48055a810768", 00:09:35.254 "is_configured": true, 00:09:35.254 "data_offset": 2048, 00:09:35.254 "data_size": 63488 00:09:35.254 }, 00:09:35.254 { 00:09:35.254 "name": null, 00:09:35.254 "uuid": "8f4218d5-524e-46b3-8cbe-52ab5402dad2", 00:09:35.254 "is_configured": false, 00:09:35.254 "data_offset": 0, 00:09:35.254 "data_size": 63488 00:09:35.254 }, 00:09:35.254 { 00:09:35.254 "name": "BaseBdev3", 00:09:35.254 "uuid": "ad898ebe-e90d-4c79-a676-6f26c61be0ed", 00:09:35.254 "is_configured": true, 00:09:35.254 "data_offset": 2048, 00:09:35.254 "data_size": 63488 00:09:35.254 } 00:09:35.254 ] 00:09:35.254 }' 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.254 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.513 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.513 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.513 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.513 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.773 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.774 [2024-11-26 21:16:53.682384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.774 "name": "Existed_Raid", 00:09:35.774 "uuid": "a05e7497-194c-4e80-8cf4-a20d6d555e0c", 00:09:35.774 "strip_size_kb": 0, 00:09:35.774 "state": "configuring", 00:09:35.774 "raid_level": "raid1", 00:09:35.774 "superblock": true, 00:09:35.774 "num_base_bdevs": 3, 00:09:35.774 "num_base_bdevs_discovered": 1, 00:09:35.774 "num_base_bdevs_operational": 3, 00:09:35.774 "base_bdevs_list": [ 00:09:35.774 { 00:09:35.774 "name": "BaseBdev1", 00:09:35.774 "uuid": "51b10a02-cc35-495b-9aa5-48055a810768", 00:09:35.774 "is_configured": true, 00:09:35.774 "data_offset": 2048, 00:09:35.774 "data_size": 63488 00:09:35.774 }, 00:09:35.774 { 00:09:35.774 "name": null, 00:09:35.774 "uuid": "8f4218d5-524e-46b3-8cbe-52ab5402dad2", 00:09:35.774 "is_configured": false, 00:09:35.774 "data_offset": 0, 00:09:35.774 "data_size": 63488 00:09:35.774 }, 00:09:35.774 { 00:09:35.774 "name": null, 00:09:35.774 "uuid": "ad898ebe-e90d-4c79-a676-6f26c61be0ed", 00:09:35.774 "is_configured": false, 00:09:35.774 "data_offset": 0, 00:09:35.774 "data_size": 63488 00:09:35.774 } 00:09:35.774 ] 00:09:35.774 }' 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.774 21:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.034 
21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.034 [2024-11-26 21:16:54.129687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.034 "name": "Existed_Raid", 00:09:36.034 "uuid": "a05e7497-194c-4e80-8cf4-a20d6d555e0c", 00:09:36.034 "strip_size_kb": 0, 00:09:36.034 "state": "configuring", 00:09:36.034 "raid_level": "raid1", 00:09:36.034 "superblock": true, 00:09:36.034 "num_base_bdevs": 3, 00:09:36.034 "num_base_bdevs_discovered": 2, 00:09:36.034 "num_base_bdevs_operational": 3, 00:09:36.034 "base_bdevs_list": [ 00:09:36.034 { 00:09:36.034 "name": "BaseBdev1", 00:09:36.034 "uuid": "51b10a02-cc35-495b-9aa5-48055a810768", 00:09:36.034 "is_configured": true, 00:09:36.034 "data_offset": 2048, 00:09:36.034 "data_size": 63488 00:09:36.034 }, 00:09:36.034 { 00:09:36.034 "name": null, 00:09:36.034 "uuid": "8f4218d5-524e-46b3-8cbe-52ab5402dad2", 00:09:36.034 "is_configured": false, 00:09:36.034 "data_offset": 0, 00:09:36.034 "data_size": 63488 00:09:36.034 }, 00:09:36.034 { 00:09:36.034 "name": "BaseBdev3", 00:09:36.034 "uuid": "ad898ebe-e90d-4c79-a676-6f26c61be0ed", 00:09:36.034 "is_configured": true, 00:09:36.034 "data_offset": 2048, 00:09:36.034 "data_size": 63488 00:09:36.034 } 00:09:36.034 ] 00:09:36.034 }' 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.034 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.602 [2024-11-26 21:16:54.636860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.602 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.862 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.862 "name": "Existed_Raid", 00:09:36.862 "uuid": "a05e7497-194c-4e80-8cf4-a20d6d555e0c", 00:09:36.862 "strip_size_kb": 0, 00:09:36.862 "state": "configuring", 00:09:36.862 "raid_level": "raid1", 00:09:36.862 "superblock": true, 00:09:36.862 "num_base_bdevs": 3, 00:09:36.862 "num_base_bdevs_discovered": 1, 00:09:36.862 "num_base_bdevs_operational": 3, 00:09:36.862 "base_bdevs_list": [ 00:09:36.862 { 00:09:36.862 "name": null, 00:09:36.862 "uuid": "51b10a02-cc35-495b-9aa5-48055a810768", 00:09:36.862 "is_configured": false, 00:09:36.862 "data_offset": 0, 00:09:36.862 "data_size": 63488 00:09:36.862 }, 00:09:36.862 { 00:09:36.862 "name": null, 00:09:36.862 "uuid": 
"8f4218d5-524e-46b3-8cbe-52ab5402dad2", 00:09:36.862 "is_configured": false, 00:09:36.862 "data_offset": 0, 00:09:36.862 "data_size": 63488 00:09:36.862 }, 00:09:36.862 { 00:09:36.862 "name": "BaseBdev3", 00:09:36.862 "uuid": "ad898ebe-e90d-4c79-a676-6f26c61be0ed", 00:09:36.862 "is_configured": true, 00:09:36.862 "data_offset": 2048, 00:09:36.862 "data_size": 63488 00:09:36.862 } 00:09:36.862 ] 00:09:36.862 }' 00:09:36.862 21:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.862 21:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.122 [2024-11-26 21:16:55.220528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.122 "name": "Existed_Raid", 00:09:37.122 "uuid": "a05e7497-194c-4e80-8cf4-a20d6d555e0c", 00:09:37.122 "strip_size_kb": 0, 00:09:37.122 "state": "configuring", 00:09:37.122 
"raid_level": "raid1", 00:09:37.122 "superblock": true, 00:09:37.122 "num_base_bdevs": 3, 00:09:37.122 "num_base_bdevs_discovered": 2, 00:09:37.122 "num_base_bdevs_operational": 3, 00:09:37.122 "base_bdevs_list": [ 00:09:37.122 { 00:09:37.122 "name": null, 00:09:37.122 "uuid": "51b10a02-cc35-495b-9aa5-48055a810768", 00:09:37.122 "is_configured": false, 00:09:37.122 "data_offset": 0, 00:09:37.122 "data_size": 63488 00:09:37.122 }, 00:09:37.122 { 00:09:37.122 "name": "BaseBdev2", 00:09:37.122 "uuid": "8f4218d5-524e-46b3-8cbe-52ab5402dad2", 00:09:37.122 "is_configured": true, 00:09:37.122 "data_offset": 2048, 00:09:37.122 "data_size": 63488 00:09:37.122 }, 00:09:37.122 { 00:09:37.122 "name": "BaseBdev3", 00:09:37.122 "uuid": "ad898ebe-e90d-4c79-a676-6f26c61be0ed", 00:09:37.122 "is_configured": true, 00:09:37.122 "data_offset": 2048, 00:09:37.122 "data_size": 63488 00:09:37.122 } 00:09:37.122 ] 00:09:37.122 }' 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.122 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.705 21:16:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 51b10a02-cc35-495b-9aa5-48055a810768 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.705 [2024-11-26 21:16:55.799044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:37.705 [2024-11-26 21:16:55.799364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:37.705 [2024-11-26 21:16:55.799400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:37.705 [2024-11-26 21:16:55.799710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:37.705 [2024-11-26 21:16:55.799896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:37.705 [2024-11-26 21:16:55.799940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:37.705 NewBaseBdev 00:09:37.705 [2024-11-26 21:16:55.800130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:37.705 
21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.705 [ 00:09:37.705 { 00:09:37.705 "name": "NewBaseBdev", 00:09:37.705 "aliases": [ 00:09:37.705 "51b10a02-cc35-495b-9aa5-48055a810768" 00:09:37.705 ], 00:09:37.705 "product_name": "Malloc disk", 00:09:37.705 "block_size": 512, 00:09:37.705 "num_blocks": 65536, 00:09:37.705 "uuid": "51b10a02-cc35-495b-9aa5-48055a810768", 00:09:37.705 "assigned_rate_limits": { 00:09:37.705 "rw_ios_per_sec": 0, 00:09:37.705 "rw_mbytes_per_sec": 0, 00:09:37.705 "r_mbytes_per_sec": 0, 00:09:37.705 "w_mbytes_per_sec": 0 00:09:37.705 }, 00:09:37.705 "claimed": true, 00:09:37.705 "claim_type": "exclusive_write", 00:09:37.705 
"zoned": false, 00:09:37.705 "supported_io_types": { 00:09:37.705 "read": true, 00:09:37.705 "write": true, 00:09:37.705 "unmap": true, 00:09:37.705 "flush": true, 00:09:37.705 "reset": true, 00:09:37.705 "nvme_admin": false, 00:09:37.705 "nvme_io": false, 00:09:37.705 "nvme_io_md": false, 00:09:37.705 "write_zeroes": true, 00:09:37.705 "zcopy": true, 00:09:37.705 "get_zone_info": false, 00:09:37.705 "zone_management": false, 00:09:37.705 "zone_append": false, 00:09:37.705 "compare": false, 00:09:37.705 "compare_and_write": false, 00:09:37.705 "abort": true, 00:09:37.705 "seek_hole": false, 00:09:37.705 "seek_data": false, 00:09:37.705 "copy": true, 00:09:37.705 "nvme_iov_md": false 00:09:37.705 }, 00:09:37.705 "memory_domains": [ 00:09:37.705 { 00:09:37.705 "dma_device_id": "system", 00:09:37.705 "dma_device_type": 1 00:09:37.705 }, 00:09:37.705 { 00:09:37.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.705 "dma_device_type": 2 00:09:37.705 } 00:09:37.705 ], 00:09:37.705 "driver_specific": {} 00:09:37.705 } 00:09:37.705 ] 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.705 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.980 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.980 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.980 "name": "Existed_Raid", 00:09:37.980 "uuid": "a05e7497-194c-4e80-8cf4-a20d6d555e0c", 00:09:37.980 "strip_size_kb": 0, 00:09:37.980 "state": "online", 00:09:37.980 "raid_level": "raid1", 00:09:37.980 "superblock": true, 00:09:37.980 "num_base_bdevs": 3, 00:09:37.980 "num_base_bdevs_discovered": 3, 00:09:37.980 "num_base_bdevs_operational": 3, 00:09:37.981 "base_bdevs_list": [ 00:09:37.981 { 00:09:37.981 "name": "NewBaseBdev", 00:09:37.981 "uuid": "51b10a02-cc35-495b-9aa5-48055a810768", 00:09:37.981 "is_configured": true, 00:09:37.981 "data_offset": 2048, 00:09:37.981 "data_size": 63488 00:09:37.981 }, 00:09:37.981 { 00:09:37.981 "name": "BaseBdev2", 00:09:37.981 "uuid": "8f4218d5-524e-46b3-8cbe-52ab5402dad2", 00:09:37.981 "is_configured": true, 00:09:37.981 "data_offset": 2048, 00:09:37.981 "data_size": 63488 00:09:37.981 }, 00:09:37.981 
{ 00:09:37.981 "name": "BaseBdev3", 00:09:37.981 "uuid": "ad898ebe-e90d-4c79-a676-6f26c61be0ed", 00:09:37.981 "is_configured": true, 00:09:37.981 "data_offset": 2048, 00:09:37.981 "data_size": 63488 00:09:37.981 } 00:09:37.981 ] 00:09:37.981 }' 00:09:37.981 21:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.981 21:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.241 [2024-11-26 21:16:56.254565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.241 "name": "Existed_Raid", 00:09:38.241 
"aliases": [ 00:09:38.241 "a05e7497-194c-4e80-8cf4-a20d6d555e0c" 00:09:38.241 ], 00:09:38.241 "product_name": "Raid Volume", 00:09:38.241 "block_size": 512, 00:09:38.241 "num_blocks": 63488, 00:09:38.241 "uuid": "a05e7497-194c-4e80-8cf4-a20d6d555e0c", 00:09:38.241 "assigned_rate_limits": { 00:09:38.241 "rw_ios_per_sec": 0, 00:09:38.241 "rw_mbytes_per_sec": 0, 00:09:38.241 "r_mbytes_per_sec": 0, 00:09:38.241 "w_mbytes_per_sec": 0 00:09:38.241 }, 00:09:38.241 "claimed": false, 00:09:38.241 "zoned": false, 00:09:38.241 "supported_io_types": { 00:09:38.241 "read": true, 00:09:38.241 "write": true, 00:09:38.241 "unmap": false, 00:09:38.241 "flush": false, 00:09:38.241 "reset": true, 00:09:38.241 "nvme_admin": false, 00:09:38.241 "nvme_io": false, 00:09:38.241 "nvme_io_md": false, 00:09:38.241 "write_zeroes": true, 00:09:38.241 "zcopy": false, 00:09:38.241 "get_zone_info": false, 00:09:38.241 "zone_management": false, 00:09:38.241 "zone_append": false, 00:09:38.241 "compare": false, 00:09:38.241 "compare_and_write": false, 00:09:38.241 "abort": false, 00:09:38.241 "seek_hole": false, 00:09:38.241 "seek_data": false, 00:09:38.241 "copy": false, 00:09:38.241 "nvme_iov_md": false 00:09:38.241 }, 00:09:38.241 "memory_domains": [ 00:09:38.241 { 00:09:38.241 "dma_device_id": "system", 00:09:38.241 "dma_device_type": 1 00:09:38.241 }, 00:09:38.241 { 00:09:38.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.241 "dma_device_type": 2 00:09:38.241 }, 00:09:38.241 { 00:09:38.241 "dma_device_id": "system", 00:09:38.241 "dma_device_type": 1 00:09:38.241 }, 00:09:38.241 { 00:09:38.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.241 "dma_device_type": 2 00:09:38.241 }, 00:09:38.241 { 00:09:38.241 "dma_device_id": "system", 00:09:38.241 "dma_device_type": 1 00:09:38.241 }, 00:09:38.241 { 00:09:38.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.241 "dma_device_type": 2 00:09:38.241 } 00:09:38.241 ], 00:09:38.241 "driver_specific": { 00:09:38.241 "raid": { 00:09:38.241 
"uuid": "a05e7497-194c-4e80-8cf4-a20d6d555e0c", 00:09:38.241 "strip_size_kb": 0, 00:09:38.241 "state": "online", 00:09:38.241 "raid_level": "raid1", 00:09:38.241 "superblock": true, 00:09:38.241 "num_base_bdevs": 3, 00:09:38.241 "num_base_bdevs_discovered": 3, 00:09:38.241 "num_base_bdevs_operational": 3, 00:09:38.241 "base_bdevs_list": [ 00:09:38.241 { 00:09:38.241 "name": "NewBaseBdev", 00:09:38.241 "uuid": "51b10a02-cc35-495b-9aa5-48055a810768", 00:09:38.241 "is_configured": true, 00:09:38.241 "data_offset": 2048, 00:09:38.241 "data_size": 63488 00:09:38.241 }, 00:09:38.241 { 00:09:38.241 "name": "BaseBdev2", 00:09:38.241 "uuid": "8f4218d5-524e-46b3-8cbe-52ab5402dad2", 00:09:38.241 "is_configured": true, 00:09:38.241 "data_offset": 2048, 00:09:38.241 "data_size": 63488 00:09:38.241 }, 00:09:38.241 { 00:09:38.241 "name": "BaseBdev3", 00:09:38.241 "uuid": "ad898ebe-e90d-4c79-a676-6f26c61be0ed", 00:09:38.241 "is_configured": true, 00:09:38.241 "data_offset": 2048, 00:09:38.241 "data_size": 63488 00:09:38.241 } 00:09:38.241 ] 00:09:38.241 } 00:09:38.241 } 00:09:38.241 }' 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:38.241 BaseBdev2 00:09:38.241 BaseBdev3' 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:38.241 21:16:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.241 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.502 [2024-11-26 21:16:56.505889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.502 [2024-11-26 21:16:56.505925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.502 [2024-11-26 21:16:56.506035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.502 [2024-11-26 21:16:56.506320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.502 [2024-11-26 21:16:56.506337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67852 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67852 ']' 
00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67852 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67852 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67852' 00:09:38.502 killing process with pid 67852 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67852 00:09:38.502 [2024-11-26 21:16:56.557135] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.502 21:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67852 00:09:38.762 [2024-11-26 21:16:56.850885] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.145 21:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:40.145 00:09:40.145 real 0m10.467s 00:09:40.145 user 0m16.606s 00:09:40.145 sys 0m1.896s 00:09:40.145 21:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.145 ************************************ 00:09:40.145 END TEST raid_state_function_test_sb 00:09:40.145 ************************************ 00:09:40.145 21:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.145 21:16:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:09:40.145 21:16:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:40.145 21:16:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.145 21:16:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.145 ************************************ 00:09:40.145 START TEST raid_superblock_test 00:09:40.145 ************************************ 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68472 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68472 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68472 ']' 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.145 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.145 [2024-11-26 21:16:58.095644] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:40.145 [2024-11-26 21:16:58.095845] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68472 ] 00:09:40.145 [2024-11-26 21:16:58.270109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.405 [2024-11-26 21:16:58.383205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.664 [2024-11-26 21:16:58.580430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.664 [2024-11-26 21:16:58.580548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:40.924 
21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.924 malloc1 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.924 [2024-11-26 21:16:58.967517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:40.924 [2024-11-26 21:16:58.967581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.924 [2024-11-26 21:16:58.967603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:40.924 [2024-11-26 21:16:58.967612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.924 [2024-11-26 21:16:58.969691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.924 [2024-11-26 21:16:58.969774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:40.924 pt1 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.924 21:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.924 malloc2 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.924 [2024-11-26 21:16:59.022801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:40.924 [2024-11-26 21:16:59.022942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.924 [2024-11-26 21:16:59.023009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:40.924 [2024-11-26 21:16:59.023049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.924 [2024-11-26 21:16:59.025166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.924 [2024-11-26 21:16:59.025241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:40.924 
pt2 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.924 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.185 malloc3 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.185 [2024-11-26 21:16:59.092830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:41.185 [2024-11-26 21:16:59.092965] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.185 [2024-11-26 21:16:59.093004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:41.185 [2024-11-26 21:16:59.093033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.185 [2024-11-26 21:16:59.095122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.185 [2024-11-26 21:16:59.095195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:41.185 pt3 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.185 [2024-11-26 21:16:59.104867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:41.185 [2024-11-26 21:16:59.106641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.185 [2024-11-26 21:16:59.106709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:41.185 [2024-11-26 21:16:59.106872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:41.185 [2024-11-26 21:16:59.106890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.185 [2024-11-26 21:16:59.107138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:41.185 
[2024-11-26 21:16:59.107309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:41.185 [2024-11-26 21:16:59.107328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:41.185 [2024-11-26 21:16:59.107484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.185 "name": "raid_bdev1", 00:09:41.185 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:41.185 "strip_size_kb": 0, 00:09:41.185 "state": "online", 00:09:41.185 "raid_level": "raid1", 00:09:41.185 "superblock": true, 00:09:41.185 "num_base_bdevs": 3, 00:09:41.185 "num_base_bdevs_discovered": 3, 00:09:41.185 "num_base_bdevs_operational": 3, 00:09:41.185 "base_bdevs_list": [ 00:09:41.185 { 00:09:41.185 "name": "pt1", 00:09:41.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.185 "is_configured": true, 00:09:41.185 "data_offset": 2048, 00:09:41.185 "data_size": 63488 00:09:41.185 }, 00:09:41.185 { 00:09:41.185 "name": "pt2", 00:09:41.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.185 "is_configured": true, 00:09:41.185 "data_offset": 2048, 00:09:41.185 "data_size": 63488 00:09:41.185 }, 00:09:41.185 { 00:09:41.185 "name": "pt3", 00:09:41.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.185 "is_configured": true, 00:09:41.185 "data_offset": 2048, 00:09:41.185 "data_size": 63488 00:09:41.185 } 00:09:41.185 ] 00:09:41.185 }' 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.185 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.445 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:41.445 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:41.445 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.445 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.445 21:16:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.445 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.445 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.445 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.445 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.445 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.445 [2024-11-26 21:16:59.524444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.445 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.445 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.445 "name": "raid_bdev1", 00:09:41.445 "aliases": [ 00:09:41.445 "3289dad7-7c43-49f3-90fe-5ec04e352d75" 00:09:41.445 ], 00:09:41.445 "product_name": "Raid Volume", 00:09:41.445 "block_size": 512, 00:09:41.445 "num_blocks": 63488, 00:09:41.445 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:41.445 "assigned_rate_limits": { 00:09:41.445 "rw_ios_per_sec": 0, 00:09:41.445 "rw_mbytes_per_sec": 0, 00:09:41.445 "r_mbytes_per_sec": 0, 00:09:41.445 "w_mbytes_per_sec": 0 00:09:41.445 }, 00:09:41.445 "claimed": false, 00:09:41.445 "zoned": false, 00:09:41.445 "supported_io_types": { 00:09:41.445 "read": true, 00:09:41.445 "write": true, 00:09:41.445 "unmap": false, 00:09:41.445 "flush": false, 00:09:41.445 "reset": true, 00:09:41.445 "nvme_admin": false, 00:09:41.445 "nvme_io": false, 00:09:41.445 "nvme_io_md": false, 00:09:41.445 "write_zeroes": true, 00:09:41.445 "zcopy": false, 00:09:41.445 "get_zone_info": false, 00:09:41.445 "zone_management": false, 00:09:41.445 "zone_append": false, 00:09:41.445 "compare": false, 00:09:41.445 
"compare_and_write": false, 00:09:41.445 "abort": false, 00:09:41.445 "seek_hole": false, 00:09:41.445 "seek_data": false, 00:09:41.445 "copy": false, 00:09:41.445 "nvme_iov_md": false 00:09:41.445 }, 00:09:41.445 "memory_domains": [ 00:09:41.445 { 00:09:41.445 "dma_device_id": "system", 00:09:41.445 "dma_device_type": 1 00:09:41.445 }, 00:09:41.445 { 00:09:41.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.445 "dma_device_type": 2 00:09:41.445 }, 00:09:41.445 { 00:09:41.445 "dma_device_id": "system", 00:09:41.445 "dma_device_type": 1 00:09:41.445 }, 00:09:41.445 { 00:09:41.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.445 "dma_device_type": 2 00:09:41.445 }, 00:09:41.445 { 00:09:41.445 "dma_device_id": "system", 00:09:41.445 "dma_device_type": 1 00:09:41.445 }, 00:09:41.445 { 00:09:41.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.445 "dma_device_type": 2 00:09:41.445 } 00:09:41.445 ], 00:09:41.445 "driver_specific": { 00:09:41.445 "raid": { 00:09:41.445 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:41.445 "strip_size_kb": 0, 00:09:41.445 "state": "online", 00:09:41.445 "raid_level": "raid1", 00:09:41.445 "superblock": true, 00:09:41.445 "num_base_bdevs": 3, 00:09:41.445 "num_base_bdevs_discovered": 3, 00:09:41.445 "num_base_bdevs_operational": 3, 00:09:41.445 "base_bdevs_list": [ 00:09:41.445 { 00:09:41.445 "name": "pt1", 00:09:41.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.446 "is_configured": true, 00:09:41.446 "data_offset": 2048, 00:09:41.446 "data_size": 63488 00:09:41.446 }, 00:09:41.446 { 00:09:41.446 "name": "pt2", 00:09:41.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.446 "is_configured": true, 00:09:41.446 "data_offset": 2048, 00:09:41.446 "data_size": 63488 00:09:41.446 }, 00:09:41.446 { 00:09:41.446 "name": "pt3", 00:09:41.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.446 "is_configured": true, 00:09:41.446 "data_offset": 2048, 00:09:41.446 "data_size": 63488 00:09:41.446 } 
00:09:41.446 ] 00:09:41.446 } 00:09:41.446 } 00:09:41.446 }' 00:09:41.446 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.446 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:41.446 pt2 00:09:41.446 pt3' 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:41.706 [2024-11-26 21:16:59.780000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3289dad7-7c43-49f3-90fe-5ec04e352d75 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3289dad7-7c43-49f3-90fe-5ec04e352d75 ']' 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.706 [2024-11-26 21:16:59.827641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.706 [2024-11-26 21:16:59.827721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.706 [2024-11-26 21:16:59.827812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.706 [2024-11-26 21:16:59.827905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.706 [2024-11-26 21:16:59.827943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:41.706 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.966 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:41.966 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:41.966 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:41.967 21:16:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 [2024-11-26 21:16:59.983461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:41.967 [2024-11-26 21:16:59.985363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:41.967 [2024-11-26 21:16:59.985423] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:41.967 [2024-11-26 21:16:59.985475] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:41.967 [2024-11-26 21:16:59.985530] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:41.967 [2024-11-26 21:16:59.985549] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:41.967 [2024-11-26 21:16:59.985565] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.967 [2024-11-26 21:16:59.985573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:41.967 request: 00:09:41.967 { 00:09:41.967 "name": "raid_bdev1", 00:09:41.967 "raid_level": "raid1", 00:09:41.967 "base_bdevs": [ 00:09:41.967 "malloc1", 00:09:41.967 "malloc2", 00:09:41.967 "malloc3" 00:09:41.967 ], 00:09:41.967 "superblock": false, 00:09:41.967 "method": "bdev_raid_create", 00:09:41.967 "req_id": 1 00:09:41.967 } 00:09:41.967 Got JSON-RPC error response 00:09:41.967 response: 00:09:41.967 { 00:09:41.967 "code": -17, 00:09:41.967 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:41.967 } 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.967 21:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.967 [2024-11-26 21:17:00.031294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:41.967 [2024-11-26 21:17:00.031415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.967 [2024-11-26 21:17:00.031452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:41.967 [2024-11-26 21:17:00.031481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.967 [2024-11-26 21:17:00.033599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.967 [2024-11-26 21:17:00.033673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:41.967 [2024-11-26 21:17:00.033772] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:41.967 [2024-11-26 21:17:00.033838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:41.967 pt1 00:09:41.967 
21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.967 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.968 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.968 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.968 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.968 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.968 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.968 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.968 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.968 "name": "raid_bdev1", 00:09:41.968 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:41.968 "strip_size_kb": 0, 00:09:41.968 
"state": "configuring", 00:09:41.968 "raid_level": "raid1", 00:09:41.968 "superblock": true, 00:09:41.968 "num_base_bdevs": 3, 00:09:41.968 "num_base_bdevs_discovered": 1, 00:09:41.968 "num_base_bdevs_operational": 3, 00:09:41.968 "base_bdevs_list": [ 00:09:41.968 { 00:09:41.968 "name": "pt1", 00:09:41.968 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.968 "is_configured": true, 00:09:41.968 "data_offset": 2048, 00:09:41.968 "data_size": 63488 00:09:41.968 }, 00:09:41.968 { 00:09:41.968 "name": null, 00:09:41.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.968 "is_configured": false, 00:09:41.968 "data_offset": 2048, 00:09:41.968 "data_size": 63488 00:09:41.968 }, 00:09:41.968 { 00:09:41.968 "name": null, 00:09:41.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.968 "is_configured": false, 00:09:41.968 "data_offset": 2048, 00:09:41.968 "data_size": 63488 00:09:41.968 } 00:09:41.968 ] 00:09:41.968 }' 00:09:41.968 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.968 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.539 [2024-11-26 21:17:00.462598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.539 [2024-11-26 21:17:00.462747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.539 [2024-11-26 21:17:00.462774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:42.539 
[2024-11-26 21:17:00.462783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.539 [2024-11-26 21:17:00.463258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.539 [2024-11-26 21:17:00.463278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.539 [2024-11-26 21:17:00.463365] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:42.539 [2024-11-26 21:17:00.463387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.539 pt2 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.539 [2024-11-26 21:17:00.474571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.539 "name": "raid_bdev1", 00:09:42.539 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:42.539 "strip_size_kb": 0, 00:09:42.539 "state": "configuring", 00:09:42.539 "raid_level": "raid1", 00:09:42.539 "superblock": true, 00:09:42.539 "num_base_bdevs": 3, 00:09:42.539 "num_base_bdevs_discovered": 1, 00:09:42.539 "num_base_bdevs_operational": 3, 00:09:42.539 "base_bdevs_list": [ 00:09:42.539 { 00:09:42.539 "name": "pt1", 00:09:42.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.539 "is_configured": true, 00:09:42.539 "data_offset": 2048, 00:09:42.539 "data_size": 63488 00:09:42.539 }, 00:09:42.539 { 00:09:42.539 "name": null, 00:09:42.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.539 "is_configured": false, 00:09:42.539 "data_offset": 0, 00:09:42.539 "data_size": 63488 00:09:42.539 }, 00:09:42.539 { 00:09:42.539 "name": null, 00:09:42.539 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.539 "is_configured": false, 00:09:42.539 
"data_offset": 2048, 00:09:42.539 "data_size": 63488 00:09:42.539 } 00:09:42.539 ] 00:09:42.539 }' 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.539 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.799 [2024-11-26 21:17:00.909852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.799 [2024-11-26 21:17:00.910020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.799 [2024-11-26 21:17:00.910061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:42.799 [2024-11-26 21:17:00.910093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.799 [2024-11-26 21:17:00.910595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.799 [2024-11-26 21:17:00.910659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.799 [2024-11-26 21:17:00.910791] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:42.799 [2024-11-26 21:17:00.910863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.799 pt2 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.799 21:17:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.799 [2024-11-26 21:17:00.921842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:42.799 [2024-11-26 21:17:00.921985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.799 [2024-11-26 21:17:00.922021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:42.799 [2024-11-26 21:17:00.922049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.799 [2024-11-26 21:17:00.922550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.799 [2024-11-26 21:17:00.922618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:42.799 [2024-11-26 21:17:00.922726] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:42.799 [2024-11-26 21:17:00.922794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:42.799 [2024-11-26 21:17:00.922946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:42.799 [2024-11-26 21:17:00.922999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:42.799 [2024-11-26 21:17:00.923267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:42.799 [2024-11-26 21:17:00.923449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:42.799 [2024-11-26 21:17:00.923488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:42.799 [2024-11-26 21:17:00.923654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.799 pt3 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:42.799 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.800 21:17:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.800 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.060 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.060 "name": "raid_bdev1", 00:09:43.060 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:43.060 "strip_size_kb": 0, 00:09:43.060 "state": "online", 00:09:43.060 "raid_level": "raid1", 00:09:43.060 "superblock": true, 00:09:43.060 "num_base_bdevs": 3, 00:09:43.060 "num_base_bdevs_discovered": 3, 00:09:43.060 "num_base_bdevs_operational": 3, 00:09:43.060 "base_bdevs_list": [ 00:09:43.060 { 00:09:43.060 "name": "pt1", 00:09:43.060 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.060 "is_configured": true, 00:09:43.060 "data_offset": 2048, 00:09:43.060 "data_size": 63488 00:09:43.060 }, 00:09:43.060 { 00:09:43.060 "name": "pt2", 00:09:43.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.060 "is_configured": true, 00:09:43.060 "data_offset": 2048, 00:09:43.060 "data_size": 63488 00:09:43.060 }, 00:09:43.060 { 00:09:43.060 "name": "pt3", 00:09:43.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.060 "is_configured": true, 00:09:43.060 "data_offset": 2048, 00:09:43.060 "data_size": 63488 00:09:43.060 } 00:09:43.060 ] 00:09:43.060 }' 00:09:43.060 21:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.060 21:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.321 [2024-11-26 21:17:01.377349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.321 "name": "raid_bdev1", 00:09:43.321 "aliases": [ 00:09:43.321 "3289dad7-7c43-49f3-90fe-5ec04e352d75" 00:09:43.321 ], 00:09:43.321 "product_name": "Raid Volume", 00:09:43.321 "block_size": 512, 00:09:43.321 "num_blocks": 63488, 00:09:43.321 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:43.321 "assigned_rate_limits": { 00:09:43.321 "rw_ios_per_sec": 0, 00:09:43.321 "rw_mbytes_per_sec": 0, 00:09:43.321 "r_mbytes_per_sec": 0, 00:09:43.321 "w_mbytes_per_sec": 0 00:09:43.321 }, 00:09:43.321 "claimed": false, 00:09:43.321 "zoned": false, 00:09:43.321 "supported_io_types": { 00:09:43.321 "read": true, 00:09:43.321 "write": true, 00:09:43.321 "unmap": false, 00:09:43.321 "flush": false, 00:09:43.321 "reset": true, 00:09:43.321 "nvme_admin": false, 00:09:43.321 "nvme_io": false, 00:09:43.321 "nvme_io_md": false, 00:09:43.321 "write_zeroes": true, 00:09:43.321 "zcopy": false, 00:09:43.321 "get_zone_info": 
false, 00:09:43.321 "zone_management": false, 00:09:43.321 "zone_append": false, 00:09:43.321 "compare": false, 00:09:43.321 "compare_and_write": false, 00:09:43.321 "abort": false, 00:09:43.321 "seek_hole": false, 00:09:43.321 "seek_data": false, 00:09:43.321 "copy": false, 00:09:43.321 "nvme_iov_md": false 00:09:43.321 }, 00:09:43.321 "memory_domains": [ 00:09:43.321 { 00:09:43.321 "dma_device_id": "system", 00:09:43.321 "dma_device_type": 1 00:09:43.321 }, 00:09:43.321 { 00:09:43.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.321 "dma_device_type": 2 00:09:43.321 }, 00:09:43.321 { 00:09:43.321 "dma_device_id": "system", 00:09:43.321 "dma_device_type": 1 00:09:43.321 }, 00:09:43.321 { 00:09:43.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.321 "dma_device_type": 2 00:09:43.321 }, 00:09:43.321 { 00:09:43.321 "dma_device_id": "system", 00:09:43.321 "dma_device_type": 1 00:09:43.321 }, 00:09:43.321 { 00:09:43.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.321 "dma_device_type": 2 00:09:43.321 } 00:09:43.321 ], 00:09:43.321 "driver_specific": { 00:09:43.321 "raid": { 00:09:43.321 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:43.321 "strip_size_kb": 0, 00:09:43.321 "state": "online", 00:09:43.321 "raid_level": "raid1", 00:09:43.321 "superblock": true, 00:09:43.321 "num_base_bdevs": 3, 00:09:43.321 "num_base_bdevs_discovered": 3, 00:09:43.321 "num_base_bdevs_operational": 3, 00:09:43.321 "base_bdevs_list": [ 00:09:43.321 { 00:09:43.321 "name": "pt1", 00:09:43.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.321 "is_configured": true, 00:09:43.321 "data_offset": 2048, 00:09:43.321 "data_size": 63488 00:09:43.321 }, 00:09:43.321 { 00:09:43.321 "name": "pt2", 00:09:43.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.321 "is_configured": true, 00:09:43.321 "data_offset": 2048, 00:09:43.321 "data_size": 63488 00:09:43.321 }, 00:09:43.321 { 00:09:43.321 "name": "pt3", 00:09:43.321 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:43.321 "is_configured": true, 00:09:43.321 "data_offset": 2048, 00:09:43.321 "data_size": 63488 00:09:43.321 } 00:09:43.321 ] 00:09:43.321 } 00:09:43.321 } 00:09:43.321 }' 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:43.321 pt2 00:09:43.321 pt3' 00:09:43.321 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.581 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.582 [2024-11-26 21:17:01.644885] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3289dad7-7c43-49f3-90fe-5ec04e352d75 '!=' 3289dad7-7c43-49f3-90fe-5ec04e352d75 ']' 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.582 [2024-11-26 21:17:01.692547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.582 21:17:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.582 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.841 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.841 "name": "raid_bdev1", 00:09:43.841 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:43.841 "strip_size_kb": 0, 00:09:43.841 "state": "online", 00:09:43.841 "raid_level": "raid1", 00:09:43.841 "superblock": true, 00:09:43.841 "num_base_bdevs": 3, 00:09:43.841 "num_base_bdevs_discovered": 2, 00:09:43.841 "num_base_bdevs_operational": 2, 00:09:43.841 "base_bdevs_list": [ 00:09:43.841 { 00:09:43.841 "name": null, 00:09:43.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.841 "is_configured": false, 00:09:43.841 "data_offset": 0, 00:09:43.841 "data_size": 63488 00:09:43.841 }, 00:09:43.841 { 00:09:43.841 "name": "pt2", 00:09:43.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.841 "is_configured": true, 00:09:43.841 "data_offset": 2048, 00:09:43.841 "data_size": 63488 00:09:43.841 }, 00:09:43.841 { 00:09:43.841 "name": "pt3", 00:09:43.841 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.841 "is_configured": true, 00:09:43.841 "data_offset": 2048, 00:09:43.841 "data_size": 63488 00:09:43.841 } 
00:09:43.841 ] 00:09:43.841 }' 00:09:43.841 21:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.841 21:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.103 [2024-11-26 21:17:02.163695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.103 [2024-11-26 21:17:02.163807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.103 [2024-11-26 21:17:02.163919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.103 [2024-11-26 21:17:02.164018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.103 [2024-11-26 21:17:02.164085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.103 21:17:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.103 [2024-11-26 21:17:02.247507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.103 [2024-11-26 21:17:02.247567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.103 [2024-11-26 21:17:02.247600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:44.103 [2024-11-26 21:17:02.247611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.103 [2024-11-26 21:17:02.249803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.103 [2024-11-26 21:17:02.249847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.103 [2024-11-26 21:17:02.249920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:44.103 [2024-11-26 21:17:02.249982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.103 pt2 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.103 21:17:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.103 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.364 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.364 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.364 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.364 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.364 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.364 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.364 "name": "raid_bdev1", 00:09:44.364 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:44.364 "strip_size_kb": 0, 00:09:44.364 "state": "configuring", 00:09:44.364 "raid_level": "raid1", 00:09:44.364 "superblock": true, 00:09:44.364 "num_base_bdevs": 3, 00:09:44.364 "num_base_bdevs_discovered": 1, 00:09:44.364 "num_base_bdevs_operational": 2, 00:09:44.364 "base_bdevs_list": [ 00:09:44.364 { 00:09:44.364 "name": null, 00:09:44.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.364 "is_configured": false, 00:09:44.364 "data_offset": 2048, 00:09:44.364 "data_size": 63488 00:09:44.364 }, 00:09:44.364 { 00:09:44.364 "name": "pt2", 00:09:44.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.364 "is_configured": true, 00:09:44.364 "data_offset": 2048, 00:09:44.364 "data_size": 63488 00:09:44.364 }, 00:09:44.364 { 00:09:44.364 "name": null, 00:09:44.364 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.364 "is_configured": false, 00:09:44.364 "data_offset": 2048, 00:09:44.364 "data_size": 63488 00:09:44.364 } 
00:09:44.364 ] 00:09:44.364 }' 00:09:44.364 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.364 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.625 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:44.625 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:44.625 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:44.625 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.626 [2024-11-26 21:17:02.714758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:44.626 [2024-11-26 21:17:02.714946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.626 [2024-11-26 21:17:02.715007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:44.626 [2024-11-26 21:17:02.715042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.626 [2024-11-26 21:17:02.715520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.626 [2024-11-26 21:17:02.715588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:44.626 [2024-11-26 21:17:02.715720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:44.626 [2024-11-26 21:17:02.715781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:44.626 [2024-11-26 21:17:02.715946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:44.626 [2024-11-26 21:17:02.716006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:44.626 [2024-11-26 21:17:02.716294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:44.626 [2024-11-26 21:17:02.716490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:44.626 [2024-11-26 21:17:02.716532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:44.626 [2024-11-26 21:17:02.716715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.626 pt3 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.626 
21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.626 "name": "raid_bdev1", 00:09:44.626 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:44.626 "strip_size_kb": 0, 00:09:44.626 "state": "online", 00:09:44.626 "raid_level": "raid1", 00:09:44.626 "superblock": true, 00:09:44.626 "num_base_bdevs": 3, 00:09:44.626 "num_base_bdevs_discovered": 2, 00:09:44.626 "num_base_bdevs_operational": 2, 00:09:44.626 "base_bdevs_list": [ 00:09:44.626 { 00:09:44.626 "name": null, 00:09:44.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.626 "is_configured": false, 00:09:44.626 "data_offset": 2048, 00:09:44.626 "data_size": 63488 00:09:44.626 }, 00:09:44.626 { 00:09:44.626 "name": "pt2", 00:09:44.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.626 "is_configured": true, 00:09:44.626 "data_offset": 2048, 00:09:44.626 "data_size": 63488 00:09:44.626 }, 00:09:44.626 { 00:09:44.626 "name": "pt3", 00:09:44.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.626 "is_configured": true, 00:09:44.626 "data_offset": 2048, 00:09:44.626 "data_size": 63488 00:09:44.626 } 00:09:44.626 ] 00:09:44.626 }' 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.626 21:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.195 [2024-11-26 21:17:03.189910] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.195 [2024-11-26 21:17:03.189946] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.195 [2024-11-26 21:17:03.190044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.195 [2024-11-26 21:17:03.190118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.195 [2024-11-26 21:17:03.190128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.195 [2024-11-26 21:17:03.265774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:45.195 [2024-11-26 21:17:03.265899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.195 [2024-11-26 21:17:03.265921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:45.195 [2024-11-26 21:17:03.265930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.195 [2024-11-26 21:17:03.268122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.195 [2024-11-26 21:17:03.268160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:45.195 [2024-11-26 21:17:03.268235] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:45.195 [2024-11-26 21:17:03.268277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:45.195 [2024-11-26 21:17:03.268401] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:45.195 [2024-11-26 21:17:03.268411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.195 [2024-11-26 21:17:03.268425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:45.195 [2024-11-26 21:17:03.268481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.195 pt1 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.195 "name": "raid_bdev1", 00:09:45.195 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:45.195 "strip_size_kb": 0, 00:09:45.195 "state": "configuring", 00:09:45.195 "raid_level": "raid1", 00:09:45.195 "superblock": true, 00:09:45.195 "num_base_bdevs": 3, 00:09:45.195 "num_base_bdevs_discovered": 1, 00:09:45.195 "num_base_bdevs_operational": 2, 00:09:45.195 "base_bdevs_list": [ 00:09:45.195 { 00:09:45.195 "name": null, 00:09:45.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.195 "is_configured": false, 00:09:45.195 "data_offset": 2048, 00:09:45.195 "data_size": 63488 00:09:45.195 }, 00:09:45.195 { 00:09:45.195 "name": "pt2", 00:09:45.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.195 "is_configured": true, 00:09:45.195 "data_offset": 2048, 00:09:45.195 "data_size": 63488 00:09:45.195 }, 00:09:45.195 { 00:09:45.195 "name": null, 00:09:45.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.195 "is_configured": false, 00:09:45.195 "data_offset": 2048, 00:09:45.195 "data_size": 63488 00:09:45.195 } 00:09:45.195 ] 00:09:45.195 }' 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.195 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.765 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:45.765 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:45.765 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.765 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.765 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:45.765 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:45.765 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:45.765 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.765 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.765 [2024-11-26 21:17:03.752945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:45.765 [2024-11-26 21:17:03.753074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.765 [2024-11-26 21:17:03.753116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:45.765 [2024-11-26 21:17:03.753144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.765 [2024-11-26 21:17:03.753614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.765 [2024-11-26 21:17:03.753671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:45.765 [2024-11-26 21:17:03.753785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:45.765 [2024-11-26 21:17:03.753836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:45.766 [2024-11-26 21:17:03.754004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:45.766 [2024-11-26 21:17:03.754043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.766 [2024-11-26 21:17:03.754293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:45.766 [2024-11-26 21:17:03.754484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:45.766 [2024-11-26 21:17:03.754531] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:45.766 [2024-11-26 21:17:03.754705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.766 pt3 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.766 "name": "raid_bdev1", 00:09:45.766 "uuid": "3289dad7-7c43-49f3-90fe-5ec04e352d75", 00:09:45.766 "strip_size_kb": 0, 00:09:45.766 "state": "online", 00:09:45.766 "raid_level": "raid1", 00:09:45.766 "superblock": true, 00:09:45.766 "num_base_bdevs": 3, 00:09:45.766 "num_base_bdevs_discovered": 2, 00:09:45.766 "num_base_bdevs_operational": 2, 00:09:45.766 "base_bdevs_list": [ 00:09:45.766 { 00:09:45.766 "name": null, 00:09:45.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.766 "is_configured": false, 00:09:45.766 "data_offset": 2048, 00:09:45.766 "data_size": 63488 00:09:45.766 }, 00:09:45.766 { 00:09:45.766 "name": "pt2", 00:09:45.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.766 "is_configured": true, 00:09:45.766 "data_offset": 2048, 00:09:45.766 "data_size": 63488 00:09:45.766 }, 00:09:45.766 { 00:09:45.766 "name": "pt3", 00:09:45.766 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.766 "is_configured": true, 00:09:45.766 "data_offset": 2048, 00:09:45.766 "data_size": 63488 00:09:45.766 } 00:09:45.766 ] 00:09:45.766 }' 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.766 21:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.337 [2024-11-26 21:17:04.256341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3289dad7-7c43-49f3-90fe-5ec04e352d75 '!=' 3289dad7-7c43-49f3-90fe-5ec04e352d75 ']' 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68472 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68472 ']' 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68472 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68472 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68472' 00:09:46.337 killing process with pid 68472 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68472 00:09:46.337 [2024-11-26 21:17:04.321441] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.337 [2024-11-26 21:17:04.321588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.337 21:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68472 00:09:46.337 [2024-11-26 21:17:04.321674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.337 [2024-11-26 21:17:04.321689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:46.597 [2024-11-26 21:17:04.611199] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.537 21:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:47.537 00:09:47.537 real 0m7.675s 00:09:47.537 user 0m12.062s 00:09:47.537 sys 0m1.327s 00:09:47.537 ************************************ 00:09:47.537 END TEST raid_superblock_test 00:09:47.537 ************************************ 00:09:47.537 21:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.537 21:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.797 21:17:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:47.797 21:17:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:47.797 21:17:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.797 21:17:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.797 ************************************ 00:09:47.797 START TEST raid_read_error_test 00:09:47.797 ************************************ 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:47.797 21:17:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:47.797 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:47.798 21:17:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.K9D4UXvFTN 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68920 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68920 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68920 ']' 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.798 21:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.798 [2024-11-26 21:17:05.878750] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:47.798 [2024-11-26 21:17:05.878885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68920 ] 00:09:48.057 [2024-11-26 21:17:06.050071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.057 [2024-11-26 21:17:06.155211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.318 [2024-11-26 21:17:06.344773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.318 [2024-11-26 21:17:06.344833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.578 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.578 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:48.578 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.578 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:48.578 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.578 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.837 BaseBdev1_malloc 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.838 true 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.838 [2024-11-26 21:17:06.755135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:48.838 [2024-11-26 21:17:06.755195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.838 [2024-11-26 21:17:06.755214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:48.838 [2024-11-26 21:17:06.755224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.838 [2024-11-26 21:17:06.757234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.838 [2024-11-26 21:17:06.757274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:48.838 BaseBdev1 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.838 BaseBdev2_malloc 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.838 true 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.838 [2024-11-26 21:17:06.806276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:48.838 [2024-11-26 21:17:06.806329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.838 [2024-11-26 21:17:06.806361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:48.838 [2024-11-26 21:17:06.806371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.838 [2024-11-26 21:17:06.808368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.838 [2024-11-26 21:17:06.808408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:48.838 BaseBdev2 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.838 BaseBdev3_malloc 00:09:48.838 21:17:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.838 true 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.838 [2024-11-26 21:17:06.871307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:48.838 [2024-11-26 21:17:06.871460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.838 [2024-11-26 21:17:06.871480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:48.838 [2024-11-26 21:17:06.871491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.838 [2024-11-26 21:17:06.873504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.838 [2024-11-26 21:17:06.873542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:48.838 BaseBdev3 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.838 [2024-11-26 21:17:06.879371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.838 [2024-11-26 21:17:06.881151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.838 [2024-11-26 21:17:06.881217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.838 [2024-11-26 21:17:06.881414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:48.838 [2024-11-26 21:17:06.881426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.838 [2024-11-26 21:17:06.881647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:48.838 [2024-11-26 21:17:06.881803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:48.838 [2024-11-26 21:17:06.881813] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:48.838 [2024-11-26 21:17:06.881980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.838 21:17:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.838 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.838 "name": "raid_bdev1", 00:09:48.838 "uuid": "cf48fc54-0b8c-4188-96b3-ea5a0eb2011f", 00:09:48.839 "strip_size_kb": 0, 00:09:48.839 "state": "online", 00:09:48.839 "raid_level": "raid1", 00:09:48.839 "superblock": true, 00:09:48.839 "num_base_bdevs": 3, 00:09:48.839 "num_base_bdevs_discovered": 3, 00:09:48.839 "num_base_bdevs_operational": 3, 00:09:48.839 "base_bdevs_list": [ 00:09:48.839 { 00:09:48.839 "name": "BaseBdev1", 00:09:48.839 "uuid": "ec2a76f9-9c4d-586f-bfd8-d2053939dc0f", 00:09:48.839 "is_configured": true, 00:09:48.839 "data_offset": 2048, 00:09:48.839 "data_size": 63488 00:09:48.839 }, 00:09:48.839 { 00:09:48.839 "name": "BaseBdev2", 00:09:48.839 "uuid": "6c10b254-06b4-5ee2-9bc9-88fb9be8edd5", 00:09:48.839 "is_configured": true, 00:09:48.839 "data_offset": 2048, 00:09:48.839 "data_size": 63488 
00:09:48.839 }, 00:09:48.839 { 00:09:48.839 "name": "BaseBdev3", 00:09:48.839 "uuid": "b386f50a-ec4c-516e-8a9f-cbb13c4ba14f", 00:09:48.839 "is_configured": true, 00:09:48.839 "data_offset": 2048, 00:09:48.839 "data_size": 63488 00:09:48.839 } 00:09:48.839 ] 00:09:48.839 }' 00:09:48.839 21:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.839 21:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.408 21:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.408 21:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.408 [2024-11-26 21:17:07.391793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:50.347 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:50.347 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.347 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.348 
21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.348 "name": "raid_bdev1", 00:09:50.348 "uuid": "cf48fc54-0b8c-4188-96b3-ea5a0eb2011f", 00:09:50.348 "strip_size_kb": 0, 00:09:50.348 "state": "online", 00:09:50.348 "raid_level": "raid1", 00:09:50.348 "superblock": true, 00:09:50.348 "num_base_bdevs": 3, 00:09:50.348 "num_base_bdevs_discovered": 3, 00:09:50.348 "num_base_bdevs_operational": 3, 00:09:50.348 "base_bdevs_list": [ 00:09:50.348 { 00:09:50.348 "name": "BaseBdev1", 00:09:50.348 "uuid": "ec2a76f9-9c4d-586f-bfd8-d2053939dc0f", 
00:09:50.348 "is_configured": true, 00:09:50.348 "data_offset": 2048, 00:09:50.348 "data_size": 63488 00:09:50.348 }, 00:09:50.348 { 00:09:50.348 "name": "BaseBdev2", 00:09:50.348 "uuid": "6c10b254-06b4-5ee2-9bc9-88fb9be8edd5", 00:09:50.348 "is_configured": true, 00:09:50.348 "data_offset": 2048, 00:09:50.348 "data_size": 63488 00:09:50.348 }, 00:09:50.348 { 00:09:50.348 "name": "BaseBdev3", 00:09:50.348 "uuid": "b386f50a-ec4c-516e-8a9f-cbb13c4ba14f", 00:09:50.348 "is_configured": true, 00:09:50.348 "data_offset": 2048, 00:09:50.348 "data_size": 63488 00:09:50.348 } 00:09:50.348 ] 00:09:50.348 }' 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.348 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.608 [2024-11-26 21:17:08.706017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.608 [2024-11-26 21:17:08.706053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.608 [2024-11-26 21:17:08.708798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.608 [2024-11-26 21:17:08.708853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.608 [2024-11-26 21:17:08.708950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.608 [2024-11-26 21:17:08.708978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:50.608 { 00:09:50.608 "results": [ 00:09:50.608 { 00:09:50.608 "job": "raid_bdev1", 
00:09:50.608 "core_mask": "0x1", 00:09:50.608 "workload": "randrw", 00:09:50.608 "percentage": 50, 00:09:50.608 "status": "finished", 00:09:50.608 "queue_depth": 1, 00:09:50.608 "io_size": 131072, 00:09:50.608 "runtime": 1.315141, 00:09:50.608 "iops": 13953.636910414929, 00:09:50.608 "mibps": 1744.2046138018661, 00:09:50.608 "io_failed": 0, 00:09:50.608 "io_timeout": 0, 00:09:50.608 "avg_latency_us": 69.09534775421254, 00:09:50.608 "min_latency_us": 22.358078602620086, 00:09:50.608 "max_latency_us": 1416.6078602620087 00:09:50.608 } 00:09:50.608 ], 00:09:50.608 "core_count": 1 00:09:50.608 } 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68920 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68920 ']' 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68920 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68920 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.608 killing process with pid 68920 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68920' 00:09:50.608 21:17:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68920 00:09:50.608 [2024-11-26 21:17:08.758217] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.608 21:17:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68920 00:09:50.868 [2024-11-26 21:17:08.981767] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.250 21:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.250 21:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.K9D4UXvFTN 00:09:52.250 21:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.250 21:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:52.250 21:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:52.250 21:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.250 21:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:52.250 21:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:52.250 00:09:52.250 real 0m4.382s 00:09:52.250 user 0m5.174s 00:09:52.250 sys 0m0.564s 00:09:52.250 ************************************ 00:09:52.250 END TEST raid_read_error_test 00:09:52.250 ************************************ 00:09:52.250 21:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.250 21:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.250 21:17:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:52.250 21:17:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:52.250 21:17:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.250 21:17:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.250 ************************************ 00:09:52.250 START TEST raid_write_error_test 00:09:52.250 ************************************ 00:09:52.250 21:17:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.skAtIjB82X 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69066 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69066 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69066 ']' 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.250 21:17:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.250 [2024-11-26 21:17:10.305009] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:52.250 [2024-11-26 21:17:10.305218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69066 ] 00:09:52.518 [2024-11-26 21:17:10.476585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.518 [2024-11-26 21:17:10.582092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.789 [2024-11-26 21:17:10.769627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.789 [2024-11-26 21:17:10.769786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.049 BaseBdev1_malloc 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.049 true 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.049 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.049 [2024-11-26 21:17:11.172533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:53.049 [2024-11-26 21:17:11.172594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.049 [2024-11-26 21:17:11.172614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:53.050 [2024-11-26 21:17:11.172625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.050 [2024-11-26 21:17:11.174624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.050 [2024-11-26 21:17:11.174750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:53.050 BaseBdev1 00:09:53.050 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.050 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.050 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:53.050 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.050 21:17:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:53.311 BaseBdev2_malloc 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.311 true 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.311 [2024-11-26 21:17:11.227324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:53.311 [2024-11-26 21:17:11.227387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.311 [2024-11-26 21:17:11.227405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:53.311 [2024-11-26 21:17:11.227415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.311 [2024-11-26 21:17:11.229505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.311 [2024-11-26 21:17:11.229547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:53.311 BaseBdev2 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.311 21:17:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.311 BaseBdev3_malloc 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.311 true 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.311 [2024-11-26 21:17:11.296354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:53.311 [2024-11-26 21:17:11.296410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.311 [2024-11-26 21:17:11.296442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:53.311 [2024-11-26 21:17:11.296452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.311 [2024-11-26 21:17:11.298440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.311 [2024-11-26 21:17:11.298572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:53.311 BaseBdev3 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.311 [2024-11-26 21:17:11.304409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.311 [2024-11-26 21:17:11.306137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.311 [2024-11-26 21:17:11.306237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.311 [2024-11-26 21:17:11.306483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:53.311 [2024-11-26 21:17:11.306532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:53.311 [2024-11-26 21:17:11.306774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:53.311 [2024-11-26 21:17:11.306977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:53.311 [2024-11-26 21:17:11.307021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:53.311 [2024-11-26 21:17:11.307197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.311 "name": "raid_bdev1", 00:09:53.311 "uuid": "9fb796cd-e1f6-4e70-852a-3de431a6143b", 00:09:53.311 "strip_size_kb": 0, 00:09:53.311 "state": "online", 00:09:53.311 "raid_level": "raid1", 00:09:53.311 "superblock": true, 00:09:53.311 "num_base_bdevs": 3, 00:09:53.311 "num_base_bdevs_discovered": 3, 00:09:53.311 "num_base_bdevs_operational": 3, 00:09:53.311 "base_bdevs_list": [ 00:09:53.311 { 00:09:53.311 "name": "BaseBdev1", 00:09:53.311 
"uuid": "7a769d47-2bf6-5ab8-b33c-919d998a96a1", 00:09:53.311 "is_configured": true, 00:09:53.311 "data_offset": 2048, 00:09:53.311 "data_size": 63488 00:09:53.311 }, 00:09:53.311 { 00:09:53.311 "name": "BaseBdev2", 00:09:53.311 "uuid": "8b7f5e4e-21ff-5334-b03f-552b0ee08220", 00:09:53.311 "is_configured": true, 00:09:53.311 "data_offset": 2048, 00:09:53.311 "data_size": 63488 00:09:53.311 }, 00:09:53.311 { 00:09:53.311 "name": "BaseBdev3", 00:09:53.311 "uuid": "83009372-077f-5d77-970c-3a68306f811f", 00:09:53.311 "is_configured": true, 00:09:53.311 "data_offset": 2048, 00:09:53.311 "data_size": 63488 00:09:53.311 } 00:09:53.311 ] 00:09:53.311 }' 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.311 21:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.881 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:53.881 21:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:53.881 [2024-11-26 21:17:11.844829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.848 [2024-11-26 21:17:12.767844] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:54.848 [2024-11-26 21:17:12.767901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:54.848 [2024-11-26 21:17:12.768132] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.848 "name": "raid_bdev1", 00:09:54.848 "uuid": "9fb796cd-e1f6-4e70-852a-3de431a6143b", 00:09:54.848 "strip_size_kb": 0, 00:09:54.848 "state": "online", 00:09:54.848 "raid_level": "raid1", 00:09:54.848 "superblock": true, 00:09:54.848 "num_base_bdevs": 3, 00:09:54.848 "num_base_bdevs_discovered": 2, 00:09:54.848 "num_base_bdevs_operational": 2, 00:09:54.848 "base_bdevs_list": [ 00:09:54.848 { 00:09:54.848 "name": null, 00:09:54.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.848 "is_configured": false, 00:09:54.848 "data_offset": 0, 00:09:54.848 "data_size": 63488 00:09:54.848 }, 00:09:54.848 { 00:09:54.848 "name": "BaseBdev2", 00:09:54.848 "uuid": "8b7f5e4e-21ff-5334-b03f-552b0ee08220", 00:09:54.848 "is_configured": true, 00:09:54.848 "data_offset": 2048, 00:09:54.848 "data_size": 63488 00:09:54.848 }, 00:09:54.848 { 00:09:54.848 "name": "BaseBdev3", 00:09:54.848 "uuid": "83009372-077f-5d77-970c-3a68306f811f", 00:09:54.848 "is_configured": true, 00:09:54.848 "data_offset": 2048, 00:09:54.848 "data_size": 63488 00:09:54.848 } 00:09:54.848 ] 00:09:54.848 }' 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.848 21:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.121 [2024-11-26 21:17:13.182121] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.121 [2024-11-26 21:17:13.182159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.121 [2024-11-26 21:17:13.184870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.121 [2024-11-26 21:17:13.184934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.121 [2024-11-26 21:17:13.185025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.121 [2024-11-26 21:17:13.185041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:55.121 { 00:09:55.121 "results": [ 00:09:55.121 { 00:09:55.121 "job": "raid_bdev1", 00:09:55.121 "core_mask": "0x1", 00:09:55.121 "workload": "randrw", 00:09:55.121 "percentage": 50, 00:09:55.121 "status": "finished", 00:09:55.121 "queue_depth": 1, 00:09:55.121 "io_size": 131072, 00:09:55.121 "runtime": 1.338161, 00:09:55.121 "iops": 15232.845674025772, 00:09:55.121 "mibps": 1904.1057092532214, 00:09:55.121 "io_failed": 0, 00:09:55.121 "io_timeout": 0, 00:09:55.121 "avg_latency_us": 63.068364947591405, 00:09:55.121 "min_latency_us": 22.358078602620086, 00:09:55.121 "max_latency_us": 1380.8349344978167 00:09:55.121 } 00:09:55.121 ], 00:09:55.121 "core_count": 1 00:09:55.121 } 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69066 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69066 ']' 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69066 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:55.121 21:17:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69066 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.121 killing process with pid 69066 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69066' 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69066 00:09:55.121 [2024-11-26 21:17:13.234049] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.121 21:17:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69066 00:09:55.382 [2024-11-26 21:17:13.459725] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.762 21:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.skAtIjB82X 00:09:56.762 21:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:56.762 21:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:56.762 21:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:56.762 21:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:56.762 ************************************ 00:09:56.762 END TEST raid_write_error_test 00:09:56.762 ************************************ 00:09:56.762 21:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.762 21:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:56.762 21:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:56.762 00:09:56.762 real 0m4.401s 00:09:56.762 user 0m5.200s 00:09:56.762 sys 0m0.551s 00:09:56.762 21:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.762 21:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.762 21:17:14 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:56.762 21:17:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:56.762 21:17:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:56.762 21:17:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:56.762 21:17:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.762 21:17:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.762 ************************************ 00:09:56.762 START TEST raid_state_function_test 00:09:56.762 ************************************ 00:09:56.762 21:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:56.762 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:56.762 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:56.762 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:56.762 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:56.762 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:56.762 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.762 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:56.762 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:56.762 
21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.762 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:56.762 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:56.763 21:17:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69204 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69204' 00:09:56.763 Process raid pid: 69204 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69204 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69204 ']' 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.763 21:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.763 [2024-11-26 21:17:14.765669] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:56.763 [2024-11-26 21:17:14.765867] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.022 [2024-11-26 21:17:14.941824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.022 [2024-11-26 21:17:15.052640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.396 [2024-11-26 21:17:15.246809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.396 [2024-11-26 21:17:15.246921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.657 [2024-11-26 21:17:15.594978] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.657 [2024-11-26 21:17:15.595034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.657 [2024-11-26 21:17:15.595045] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.657 [2024-11-26 21:17:15.595070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.657 [2024-11-26 21:17:15.595076] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:57.657 [2024-11-26 21:17:15.595085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.657 [2024-11-26 21:17:15.595091] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:57.657 [2024-11-26 21:17:15.595099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.657 "name": "Existed_Raid", 00:09:57.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.657 "strip_size_kb": 64, 00:09:57.657 "state": "configuring", 00:09:57.657 "raid_level": "raid0", 00:09:57.657 "superblock": false, 00:09:57.657 "num_base_bdevs": 4, 00:09:57.657 "num_base_bdevs_discovered": 0, 00:09:57.657 "num_base_bdevs_operational": 4, 00:09:57.657 "base_bdevs_list": [ 00:09:57.657 { 00:09:57.657 "name": "BaseBdev1", 00:09:57.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.657 "is_configured": false, 00:09:57.657 "data_offset": 0, 00:09:57.657 "data_size": 0 00:09:57.657 }, 00:09:57.657 { 00:09:57.657 "name": "BaseBdev2", 00:09:57.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.657 "is_configured": false, 00:09:57.657 "data_offset": 0, 00:09:57.657 "data_size": 0 00:09:57.657 }, 00:09:57.657 { 00:09:57.657 "name": "BaseBdev3", 00:09:57.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.657 "is_configured": false, 00:09:57.657 "data_offset": 0, 00:09:57.657 "data_size": 0 00:09:57.657 }, 00:09:57.657 { 00:09:57.657 "name": "BaseBdev4", 00:09:57.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.657 "is_configured": false, 00:09:57.657 "data_offset": 0, 00:09:57.657 "data_size": 0 00:09:57.657 } 00:09:57.657 ] 00:09:57.657 }' 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.657 21:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.917 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:57.917 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.917 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.917 [2024-11-26 21:17:16.022168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.917 [2024-11-26 21:17:16.022258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:57.917 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.917 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.917 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.917 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.918 [2024-11-26 21:17:16.030162] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.918 [2024-11-26 21:17:16.030246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.918 [2024-11-26 21:17:16.030273] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.918 [2024-11-26 21:17:16.030294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.918 [2024-11-26 21:17:16.030312] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:57.918 [2024-11-26 21:17:16.030332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.918 [2024-11-26 21:17:16.030349] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:57.918 [2024-11-26 21:17:16.030369] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:57.918 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.918 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:57.918 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.918 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.178 [2024-11-26 21:17:16.073274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.178 BaseBdev1 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.178 [ 00:09:58.178 { 00:09:58.178 "name": "BaseBdev1", 00:09:58.178 "aliases": [ 00:09:58.178 "262f4629-6b94-43cc-845d-28b1e9502d0f" 00:09:58.178 ], 00:09:58.178 "product_name": "Malloc disk", 00:09:58.178 "block_size": 512, 00:09:58.178 "num_blocks": 65536, 00:09:58.178 "uuid": "262f4629-6b94-43cc-845d-28b1e9502d0f", 00:09:58.178 "assigned_rate_limits": { 00:09:58.178 "rw_ios_per_sec": 0, 00:09:58.178 "rw_mbytes_per_sec": 0, 00:09:58.178 "r_mbytes_per_sec": 0, 00:09:58.178 "w_mbytes_per_sec": 0 00:09:58.178 }, 00:09:58.178 "claimed": true, 00:09:58.178 "claim_type": "exclusive_write", 00:09:58.178 "zoned": false, 00:09:58.178 "supported_io_types": { 00:09:58.178 "read": true, 00:09:58.178 "write": true, 00:09:58.178 "unmap": true, 00:09:58.178 "flush": true, 00:09:58.178 "reset": true, 00:09:58.178 "nvme_admin": false, 00:09:58.178 "nvme_io": false, 00:09:58.178 "nvme_io_md": false, 00:09:58.178 "write_zeroes": true, 00:09:58.178 "zcopy": true, 00:09:58.178 "get_zone_info": false, 00:09:58.178 "zone_management": false, 00:09:58.178 "zone_append": false, 00:09:58.178 "compare": false, 00:09:58.178 "compare_and_write": false, 00:09:58.178 "abort": true, 00:09:58.178 "seek_hole": false, 00:09:58.178 "seek_data": false, 00:09:58.178 "copy": true, 00:09:58.178 "nvme_iov_md": false 00:09:58.178 }, 00:09:58.178 "memory_domains": [ 00:09:58.178 { 00:09:58.178 "dma_device_id": "system", 00:09:58.178 "dma_device_type": 1 00:09:58.178 }, 00:09:58.178 { 00:09:58.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.178 "dma_device_type": 2 00:09:58.178 } 00:09:58.178 ], 00:09:58.178 "driver_specific": {} 00:09:58.178 } 00:09:58.178 ] 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.178 "name": "Existed_Raid", 
00:09:58.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.178 "strip_size_kb": 64, 00:09:58.178 "state": "configuring", 00:09:58.178 "raid_level": "raid0", 00:09:58.178 "superblock": false, 00:09:58.178 "num_base_bdevs": 4, 00:09:58.178 "num_base_bdevs_discovered": 1, 00:09:58.178 "num_base_bdevs_operational": 4, 00:09:58.178 "base_bdevs_list": [ 00:09:58.178 { 00:09:58.178 "name": "BaseBdev1", 00:09:58.178 "uuid": "262f4629-6b94-43cc-845d-28b1e9502d0f", 00:09:58.178 "is_configured": true, 00:09:58.178 "data_offset": 0, 00:09:58.178 "data_size": 65536 00:09:58.178 }, 00:09:58.178 { 00:09:58.178 "name": "BaseBdev2", 00:09:58.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.178 "is_configured": false, 00:09:58.178 "data_offset": 0, 00:09:58.178 "data_size": 0 00:09:58.178 }, 00:09:58.178 { 00:09:58.178 "name": "BaseBdev3", 00:09:58.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.178 "is_configured": false, 00:09:58.178 "data_offset": 0, 00:09:58.178 "data_size": 0 00:09:58.178 }, 00:09:58.178 { 00:09:58.178 "name": "BaseBdev4", 00:09:58.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.178 "is_configured": false, 00:09:58.178 "data_offset": 0, 00:09:58.178 "data_size": 0 00:09:58.178 } 00:09:58.178 ] 00:09:58.178 }' 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.178 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.438 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.438 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.438 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.438 [2024-11-26 21:17:16.572528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.438 [2024-11-26 21:17:16.572589] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.439 [2024-11-26 21:17:16.580558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.439 [2024-11-26 21:17:16.582442] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.439 [2024-11-26 21:17:16.582482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.439 [2024-11-26 21:17:16.582492] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.439 [2024-11-26 21:17:16.582502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.439 [2024-11-26 21:17:16.582509] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:58.439 [2024-11-26 21:17:16.582517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.439 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.699 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.699 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.699 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.699 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.699 "name": "Existed_Raid", 00:09:58.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.699 "strip_size_kb": 64, 00:09:58.699 "state": "configuring", 00:09:58.699 "raid_level": "raid0", 00:09:58.699 "superblock": false, 00:09:58.699 "num_base_bdevs": 4, 00:09:58.699 
"num_base_bdevs_discovered": 1, 00:09:58.699 "num_base_bdevs_operational": 4, 00:09:58.699 "base_bdevs_list": [ 00:09:58.699 { 00:09:58.699 "name": "BaseBdev1", 00:09:58.699 "uuid": "262f4629-6b94-43cc-845d-28b1e9502d0f", 00:09:58.699 "is_configured": true, 00:09:58.699 "data_offset": 0, 00:09:58.699 "data_size": 65536 00:09:58.699 }, 00:09:58.699 { 00:09:58.699 "name": "BaseBdev2", 00:09:58.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.699 "is_configured": false, 00:09:58.699 "data_offset": 0, 00:09:58.699 "data_size": 0 00:09:58.699 }, 00:09:58.699 { 00:09:58.699 "name": "BaseBdev3", 00:09:58.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.699 "is_configured": false, 00:09:58.699 "data_offset": 0, 00:09:58.699 "data_size": 0 00:09:58.699 }, 00:09:58.699 { 00:09:58.699 "name": "BaseBdev4", 00:09:58.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.699 "is_configured": false, 00:09:58.699 "data_offset": 0, 00:09:58.699 "data_size": 0 00:09:58.699 } 00:09:58.699 ] 00:09:58.699 }' 00:09:58.699 21:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.699 21:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.959 [2024-11-26 21:17:17.069819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.959 BaseBdev2 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:58.959 21:17:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.959 [ 00:09:58.959 { 00:09:58.959 "name": "BaseBdev2", 00:09:58.959 "aliases": [ 00:09:58.959 "e4974c72-d9bf-479d-9b57-28c7180cb87b" 00:09:58.959 ], 00:09:58.959 "product_name": "Malloc disk", 00:09:58.959 "block_size": 512, 00:09:58.959 "num_blocks": 65536, 00:09:58.959 "uuid": "e4974c72-d9bf-479d-9b57-28c7180cb87b", 00:09:58.959 "assigned_rate_limits": { 00:09:58.959 "rw_ios_per_sec": 0, 00:09:58.959 "rw_mbytes_per_sec": 0, 00:09:58.959 "r_mbytes_per_sec": 0, 00:09:58.959 "w_mbytes_per_sec": 0 00:09:58.959 }, 00:09:58.959 "claimed": true, 00:09:58.959 "claim_type": "exclusive_write", 00:09:58.959 "zoned": false, 00:09:58.959 "supported_io_types": { 
00:09:58.959 "read": true, 00:09:58.959 "write": true, 00:09:58.959 "unmap": true, 00:09:58.959 "flush": true, 00:09:58.959 "reset": true, 00:09:58.959 "nvme_admin": false, 00:09:58.959 "nvme_io": false, 00:09:58.959 "nvme_io_md": false, 00:09:58.959 "write_zeroes": true, 00:09:58.959 "zcopy": true, 00:09:58.959 "get_zone_info": false, 00:09:58.959 "zone_management": false, 00:09:58.959 "zone_append": false, 00:09:58.959 "compare": false, 00:09:58.959 "compare_and_write": false, 00:09:58.959 "abort": true, 00:09:58.959 "seek_hole": false, 00:09:58.959 "seek_data": false, 00:09:58.959 "copy": true, 00:09:58.959 "nvme_iov_md": false 00:09:58.959 }, 00:09:58.959 "memory_domains": [ 00:09:58.959 { 00:09:58.959 "dma_device_id": "system", 00:09:58.959 "dma_device_type": 1 00:09:58.959 }, 00:09:58.959 { 00:09:58.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.959 "dma_device_type": 2 00:09:58.959 } 00:09:58.959 ], 00:09:58.959 "driver_specific": {} 00:09:58.959 } 00:09:58.959 ] 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.959 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.219 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.219 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.219 "name": "Existed_Raid", 00:09:59.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.219 "strip_size_kb": 64, 00:09:59.219 "state": "configuring", 00:09:59.219 "raid_level": "raid0", 00:09:59.219 "superblock": false, 00:09:59.219 "num_base_bdevs": 4, 00:09:59.219 "num_base_bdevs_discovered": 2, 00:09:59.219 "num_base_bdevs_operational": 4, 00:09:59.219 "base_bdevs_list": [ 00:09:59.219 { 00:09:59.219 "name": "BaseBdev1", 00:09:59.219 "uuid": "262f4629-6b94-43cc-845d-28b1e9502d0f", 00:09:59.219 "is_configured": true, 00:09:59.219 "data_offset": 0, 00:09:59.219 "data_size": 65536 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "name": "BaseBdev2", 00:09:59.219 "uuid": "e4974c72-d9bf-479d-9b57-28c7180cb87b", 00:09:59.219 
"is_configured": true, 00:09:59.219 "data_offset": 0, 00:09:59.219 "data_size": 65536 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "name": "BaseBdev3", 00:09:59.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.219 "is_configured": false, 00:09:59.219 "data_offset": 0, 00:09:59.219 "data_size": 0 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "name": "BaseBdev4", 00:09:59.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.219 "is_configured": false, 00:09:59.219 "data_offset": 0, 00:09:59.219 "data_size": 0 00:09:59.219 } 00:09:59.219 ] 00:09:59.219 }' 00:09:59.219 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.219 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.479 [2024-11-26 21:17:17.572586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.479 BaseBdev3 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.479 [ 00:09:59.479 { 00:09:59.479 "name": "BaseBdev3", 00:09:59.479 "aliases": [ 00:09:59.479 "76051f4b-be4a-4182-adca-4ad64cf83624" 00:09:59.479 ], 00:09:59.479 "product_name": "Malloc disk", 00:09:59.479 "block_size": 512, 00:09:59.479 "num_blocks": 65536, 00:09:59.479 "uuid": "76051f4b-be4a-4182-adca-4ad64cf83624", 00:09:59.479 "assigned_rate_limits": { 00:09:59.479 "rw_ios_per_sec": 0, 00:09:59.479 "rw_mbytes_per_sec": 0, 00:09:59.479 "r_mbytes_per_sec": 0, 00:09:59.479 "w_mbytes_per_sec": 0 00:09:59.479 }, 00:09:59.479 "claimed": true, 00:09:59.479 "claim_type": "exclusive_write", 00:09:59.479 "zoned": false, 00:09:59.479 "supported_io_types": { 00:09:59.479 "read": true, 00:09:59.479 "write": true, 00:09:59.479 "unmap": true, 00:09:59.479 "flush": true, 00:09:59.479 "reset": true, 00:09:59.479 "nvme_admin": false, 00:09:59.479 "nvme_io": false, 00:09:59.479 "nvme_io_md": false, 00:09:59.479 "write_zeroes": true, 00:09:59.479 "zcopy": true, 00:09:59.479 "get_zone_info": false, 00:09:59.479 "zone_management": false, 00:09:59.479 "zone_append": false, 00:09:59.479 "compare": false, 00:09:59.479 "compare_and_write": false, 
00:09:59.479 "abort": true, 00:09:59.479 "seek_hole": false, 00:09:59.479 "seek_data": false, 00:09:59.479 "copy": true, 00:09:59.479 "nvme_iov_md": false 00:09:59.479 }, 00:09:59.479 "memory_domains": [ 00:09:59.479 { 00:09:59.479 "dma_device_id": "system", 00:09:59.479 "dma_device_type": 1 00:09:59.479 }, 00:09:59.479 { 00:09:59.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.479 "dma_device_type": 2 00:09:59.479 } 00:09:59.479 ], 00:09:59.479 "driver_specific": {} 00:09:59.479 } 00:09:59.479 ] 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.479 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.740 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.740 "name": "Existed_Raid", 00:09:59.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.740 "strip_size_kb": 64, 00:09:59.740 "state": "configuring", 00:09:59.740 "raid_level": "raid0", 00:09:59.740 "superblock": false, 00:09:59.740 "num_base_bdevs": 4, 00:09:59.740 "num_base_bdevs_discovered": 3, 00:09:59.740 "num_base_bdevs_operational": 4, 00:09:59.740 "base_bdevs_list": [ 00:09:59.740 { 00:09:59.740 "name": "BaseBdev1", 00:09:59.740 "uuid": "262f4629-6b94-43cc-845d-28b1e9502d0f", 00:09:59.740 "is_configured": true, 00:09:59.740 "data_offset": 0, 00:09:59.740 "data_size": 65536 00:09:59.740 }, 00:09:59.740 { 00:09:59.740 "name": "BaseBdev2", 00:09:59.740 "uuid": "e4974c72-d9bf-479d-9b57-28c7180cb87b", 00:09:59.740 "is_configured": true, 00:09:59.740 "data_offset": 0, 00:09:59.740 "data_size": 65536 00:09:59.740 }, 00:09:59.740 { 00:09:59.740 "name": "BaseBdev3", 00:09:59.740 "uuid": "76051f4b-be4a-4182-adca-4ad64cf83624", 00:09:59.740 "is_configured": true, 00:09:59.740 "data_offset": 0, 00:09:59.740 "data_size": 65536 00:09:59.740 }, 00:09:59.740 { 00:09:59.740 "name": "BaseBdev4", 00:09:59.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.740 "is_configured": false, 
00:09:59.740 "data_offset": 0, 00:09:59.740 "data_size": 0 00:09:59.740 } 00:09:59.740 ] 00:09:59.740 }' 00:09:59.740 21:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.740 21:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.000 [2024-11-26 21:17:18.053290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:00.000 [2024-11-26 21:17:18.053335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.000 [2024-11-26 21:17:18.053343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:00.000 [2024-11-26 21:17:18.053594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:00.000 [2024-11-26 21:17:18.053741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.000 [2024-11-26 21:17:18.053752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:00.000 [2024-11-26 21:17:18.054013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.000 BaseBdev4 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.000 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.000 [ 00:10:00.000 { 00:10:00.000 "name": "BaseBdev4", 00:10:00.000 "aliases": [ 00:10:00.000 "1ddcf106-5035-4f27-9b32-94616d962bc7" 00:10:00.000 ], 00:10:00.000 "product_name": "Malloc disk", 00:10:00.000 "block_size": 512, 00:10:00.001 "num_blocks": 65536, 00:10:00.001 "uuid": "1ddcf106-5035-4f27-9b32-94616d962bc7", 00:10:00.001 "assigned_rate_limits": { 00:10:00.001 "rw_ios_per_sec": 0, 00:10:00.001 "rw_mbytes_per_sec": 0, 00:10:00.001 "r_mbytes_per_sec": 0, 00:10:00.001 "w_mbytes_per_sec": 0 00:10:00.001 }, 00:10:00.001 "claimed": true, 00:10:00.001 "claim_type": "exclusive_write", 00:10:00.001 "zoned": false, 00:10:00.001 "supported_io_types": { 00:10:00.001 "read": true, 00:10:00.001 "write": true, 00:10:00.001 "unmap": true, 00:10:00.001 "flush": true, 00:10:00.001 "reset": true, 00:10:00.001 
"nvme_admin": false, 00:10:00.001 "nvme_io": false, 00:10:00.001 "nvme_io_md": false, 00:10:00.001 "write_zeroes": true, 00:10:00.001 "zcopy": true, 00:10:00.001 "get_zone_info": false, 00:10:00.001 "zone_management": false, 00:10:00.001 "zone_append": false, 00:10:00.001 "compare": false, 00:10:00.001 "compare_and_write": false, 00:10:00.001 "abort": true, 00:10:00.001 "seek_hole": false, 00:10:00.001 "seek_data": false, 00:10:00.001 "copy": true, 00:10:00.001 "nvme_iov_md": false 00:10:00.001 }, 00:10:00.001 "memory_domains": [ 00:10:00.001 { 00:10:00.001 "dma_device_id": "system", 00:10:00.001 "dma_device_type": 1 00:10:00.001 }, 00:10:00.001 { 00:10:00.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.001 "dma_device_type": 2 00:10:00.001 } 00:10:00.001 ], 00:10:00.001 "driver_specific": {} 00:10:00.001 } 00:10:00.001 ] 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.001 21:17:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.001 "name": "Existed_Raid", 00:10:00.001 "uuid": "fa8699fd-fd44-41ca-8eb9-02fbc5a6d28d", 00:10:00.001 "strip_size_kb": 64, 00:10:00.001 "state": "online", 00:10:00.001 "raid_level": "raid0", 00:10:00.001 "superblock": false, 00:10:00.001 "num_base_bdevs": 4, 00:10:00.001 "num_base_bdevs_discovered": 4, 00:10:00.001 "num_base_bdevs_operational": 4, 00:10:00.001 "base_bdevs_list": [ 00:10:00.001 { 00:10:00.001 "name": "BaseBdev1", 00:10:00.001 "uuid": "262f4629-6b94-43cc-845d-28b1e9502d0f", 00:10:00.001 "is_configured": true, 00:10:00.001 "data_offset": 0, 00:10:00.001 "data_size": 65536 00:10:00.001 }, 00:10:00.001 { 00:10:00.001 "name": "BaseBdev2", 00:10:00.001 "uuid": "e4974c72-d9bf-479d-9b57-28c7180cb87b", 00:10:00.001 "is_configured": true, 00:10:00.001 "data_offset": 0, 00:10:00.001 "data_size": 65536 00:10:00.001 }, 00:10:00.001 { 00:10:00.001 "name": "BaseBdev3", 00:10:00.001 "uuid": 
"76051f4b-be4a-4182-adca-4ad64cf83624", 00:10:00.001 "is_configured": true, 00:10:00.001 "data_offset": 0, 00:10:00.001 "data_size": 65536 00:10:00.001 }, 00:10:00.001 { 00:10:00.001 "name": "BaseBdev4", 00:10:00.001 "uuid": "1ddcf106-5035-4f27-9b32-94616d962bc7", 00:10:00.001 "is_configured": true, 00:10:00.001 "data_offset": 0, 00:10:00.001 "data_size": 65536 00:10:00.001 } 00:10:00.001 ] 00:10:00.001 }' 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.001 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.572 [2024-11-26 21:17:18.489023] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.572 21:17:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.572 "name": "Existed_Raid", 00:10:00.572 "aliases": [ 00:10:00.572 "fa8699fd-fd44-41ca-8eb9-02fbc5a6d28d" 00:10:00.572 ], 00:10:00.572 "product_name": "Raid Volume", 00:10:00.572 "block_size": 512, 00:10:00.572 "num_blocks": 262144, 00:10:00.572 "uuid": "fa8699fd-fd44-41ca-8eb9-02fbc5a6d28d", 00:10:00.572 "assigned_rate_limits": { 00:10:00.572 "rw_ios_per_sec": 0, 00:10:00.572 "rw_mbytes_per_sec": 0, 00:10:00.572 "r_mbytes_per_sec": 0, 00:10:00.572 "w_mbytes_per_sec": 0 00:10:00.572 }, 00:10:00.572 "claimed": false, 00:10:00.572 "zoned": false, 00:10:00.572 "supported_io_types": { 00:10:00.572 "read": true, 00:10:00.572 "write": true, 00:10:00.572 "unmap": true, 00:10:00.572 "flush": true, 00:10:00.572 "reset": true, 00:10:00.572 "nvme_admin": false, 00:10:00.572 "nvme_io": false, 00:10:00.572 "nvme_io_md": false, 00:10:00.572 "write_zeroes": true, 00:10:00.572 "zcopy": false, 00:10:00.572 "get_zone_info": false, 00:10:00.572 "zone_management": false, 00:10:00.572 "zone_append": false, 00:10:00.572 "compare": false, 00:10:00.572 "compare_and_write": false, 00:10:00.572 "abort": false, 00:10:00.572 "seek_hole": false, 00:10:00.572 "seek_data": false, 00:10:00.572 "copy": false, 00:10:00.572 "nvme_iov_md": false 00:10:00.572 }, 00:10:00.572 "memory_domains": [ 00:10:00.572 { 00:10:00.572 "dma_device_id": "system", 00:10:00.572 "dma_device_type": 1 00:10:00.572 }, 00:10:00.572 { 00:10:00.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.572 "dma_device_type": 2 00:10:00.572 }, 00:10:00.572 { 00:10:00.572 "dma_device_id": "system", 00:10:00.572 "dma_device_type": 1 00:10:00.572 }, 00:10:00.572 { 00:10:00.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.572 "dma_device_type": 2 00:10:00.572 }, 00:10:00.572 { 00:10:00.572 "dma_device_id": "system", 00:10:00.572 "dma_device_type": 1 00:10:00.572 }, 00:10:00.572 { 00:10:00.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:00.572 "dma_device_type": 2 00:10:00.572 }, 00:10:00.572 { 00:10:00.572 "dma_device_id": "system", 00:10:00.572 "dma_device_type": 1 00:10:00.572 }, 00:10:00.572 { 00:10:00.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.572 "dma_device_type": 2 00:10:00.572 } 00:10:00.572 ], 00:10:00.572 "driver_specific": { 00:10:00.572 "raid": { 00:10:00.572 "uuid": "fa8699fd-fd44-41ca-8eb9-02fbc5a6d28d", 00:10:00.572 "strip_size_kb": 64, 00:10:00.572 "state": "online", 00:10:00.572 "raid_level": "raid0", 00:10:00.572 "superblock": false, 00:10:00.572 "num_base_bdevs": 4, 00:10:00.572 "num_base_bdevs_discovered": 4, 00:10:00.572 "num_base_bdevs_operational": 4, 00:10:00.572 "base_bdevs_list": [ 00:10:00.572 { 00:10:00.572 "name": "BaseBdev1", 00:10:00.572 "uuid": "262f4629-6b94-43cc-845d-28b1e9502d0f", 00:10:00.572 "is_configured": true, 00:10:00.572 "data_offset": 0, 00:10:00.572 "data_size": 65536 00:10:00.572 }, 00:10:00.572 { 00:10:00.572 "name": "BaseBdev2", 00:10:00.572 "uuid": "e4974c72-d9bf-479d-9b57-28c7180cb87b", 00:10:00.572 "is_configured": true, 00:10:00.572 "data_offset": 0, 00:10:00.572 "data_size": 65536 00:10:00.572 }, 00:10:00.572 { 00:10:00.572 "name": "BaseBdev3", 00:10:00.572 "uuid": "76051f4b-be4a-4182-adca-4ad64cf83624", 00:10:00.572 "is_configured": true, 00:10:00.572 "data_offset": 0, 00:10:00.572 "data_size": 65536 00:10:00.572 }, 00:10:00.572 { 00:10:00.572 "name": "BaseBdev4", 00:10:00.572 "uuid": "1ddcf106-5035-4f27-9b32-94616d962bc7", 00:10:00.572 "is_configured": true, 00:10:00.572 "data_offset": 0, 00:10:00.572 "data_size": 65536 00:10:00.572 } 00:10:00.572 ] 00:10:00.572 } 00:10:00.572 } 00:10:00.572 }' 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:00.572 BaseBdev2 00:10:00.572 BaseBdev3 
00:10:00.572 BaseBdev4' 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.572 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.573 21:17:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.573 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.833 21:17:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.833 [2024-11-26 21:17:18.792207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.833 [2024-11-26 21:17:18.792241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.833 [2024-11-26 21:17:18.792294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.833 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.833 "name": "Existed_Raid", 00:10:00.833 "uuid": "fa8699fd-fd44-41ca-8eb9-02fbc5a6d28d", 00:10:00.833 "strip_size_kb": 64, 00:10:00.833 "state": "offline", 00:10:00.833 "raid_level": "raid0", 00:10:00.833 "superblock": false, 00:10:00.833 "num_base_bdevs": 4, 00:10:00.833 "num_base_bdevs_discovered": 3, 00:10:00.833 "num_base_bdevs_operational": 3, 00:10:00.833 "base_bdevs_list": [ 00:10:00.833 { 00:10:00.833 "name": null, 00:10:00.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.833 "is_configured": false, 00:10:00.833 "data_offset": 0, 00:10:00.833 "data_size": 65536 00:10:00.833 }, 00:10:00.833 { 00:10:00.833 "name": "BaseBdev2", 00:10:00.833 "uuid": "e4974c72-d9bf-479d-9b57-28c7180cb87b", 00:10:00.833 "is_configured": 
true, 00:10:00.833 "data_offset": 0, 00:10:00.833 "data_size": 65536 00:10:00.833 }, 00:10:00.833 { 00:10:00.833 "name": "BaseBdev3", 00:10:00.833 "uuid": "76051f4b-be4a-4182-adca-4ad64cf83624", 00:10:00.833 "is_configured": true, 00:10:00.833 "data_offset": 0, 00:10:00.833 "data_size": 65536 00:10:00.833 }, 00:10:00.833 { 00:10:00.833 "name": "BaseBdev4", 00:10:00.833 "uuid": "1ddcf106-5035-4f27-9b32-94616d962bc7", 00:10:00.834 "is_configured": true, 00:10:00.834 "data_offset": 0, 00:10:00.834 "data_size": 65536 00:10:00.834 } 00:10:00.834 ] 00:10:00.834 }' 00:10:00.834 21:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.834 21:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.403 [2024-11-26 21:17:19.362208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.403 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.403 [2024-11-26 21:17:19.510139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.663 21:17:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.663 [2024-11-26 21:17:19.656888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:01.663 [2024-11-26 21:17:19.657033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.663 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.923 BaseBdev2 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.923 [ 00:10:01.923 { 00:10:01.923 "name": "BaseBdev2", 00:10:01.923 "aliases": [ 00:10:01.923 "92548562-378e-4ec9-8100-02358d40669d" 00:10:01.923 ], 00:10:01.923 "product_name": "Malloc disk", 00:10:01.923 "block_size": 512, 00:10:01.923 "num_blocks": 65536, 00:10:01.923 "uuid": "92548562-378e-4ec9-8100-02358d40669d", 00:10:01.923 "assigned_rate_limits": { 00:10:01.923 "rw_ios_per_sec": 0, 00:10:01.923 "rw_mbytes_per_sec": 0, 00:10:01.923 "r_mbytes_per_sec": 0, 00:10:01.923 "w_mbytes_per_sec": 0 00:10:01.923 }, 00:10:01.923 "claimed": false, 00:10:01.923 "zoned": false, 00:10:01.923 "supported_io_types": { 00:10:01.923 "read": true, 00:10:01.923 "write": true, 00:10:01.923 "unmap": true, 00:10:01.923 "flush": true, 00:10:01.923 "reset": true, 00:10:01.923 "nvme_admin": false, 00:10:01.923 "nvme_io": false, 00:10:01.923 "nvme_io_md": false, 00:10:01.923 "write_zeroes": true, 00:10:01.923 "zcopy": true, 00:10:01.923 "get_zone_info": false, 00:10:01.923 "zone_management": false, 00:10:01.923 "zone_append": false, 00:10:01.923 "compare": false, 00:10:01.923 "compare_and_write": false, 00:10:01.923 "abort": true, 00:10:01.923 "seek_hole": false, 00:10:01.923 
"seek_data": false, 00:10:01.923 "copy": true, 00:10:01.923 "nvme_iov_md": false 00:10:01.923 }, 00:10:01.923 "memory_domains": [ 00:10:01.923 { 00:10:01.923 "dma_device_id": "system", 00:10:01.923 "dma_device_type": 1 00:10:01.923 }, 00:10:01.923 { 00:10:01.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.923 "dma_device_type": 2 00:10:01.923 } 00:10:01.923 ], 00:10:01.923 "driver_specific": {} 00:10:01.923 } 00:10:01.923 ] 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.923 BaseBdev3 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:01.923 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.924 [ 00:10:01.924 { 00:10:01.924 "name": "BaseBdev3", 00:10:01.924 "aliases": [ 00:10:01.924 "986a57c2-f888-4400-8728-6dbcea41f05f" 00:10:01.924 ], 00:10:01.924 "product_name": "Malloc disk", 00:10:01.924 "block_size": 512, 00:10:01.924 "num_blocks": 65536, 00:10:01.924 "uuid": "986a57c2-f888-4400-8728-6dbcea41f05f", 00:10:01.924 "assigned_rate_limits": { 00:10:01.924 "rw_ios_per_sec": 0, 00:10:01.924 "rw_mbytes_per_sec": 0, 00:10:01.924 "r_mbytes_per_sec": 0, 00:10:01.924 "w_mbytes_per_sec": 0 00:10:01.924 }, 00:10:01.924 "claimed": false, 00:10:01.924 "zoned": false, 00:10:01.924 "supported_io_types": { 00:10:01.924 "read": true, 00:10:01.924 "write": true, 00:10:01.924 "unmap": true, 00:10:01.924 "flush": true, 00:10:01.924 "reset": true, 00:10:01.924 "nvme_admin": false, 00:10:01.924 "nvme_io": false, 00:10:01.924 "nvme_io_md": false, 00:10:01.924 "write_zeroes": true, 00:10:01.924 "zcopy": true, 00:10:01.924 "get_zone_info": false, 00:10:01.924 "zone_management": false, 00:10:01.924 "zone_append": false, 00:10:01.924 "compare": false, 00:10:01.924 "compare_and_write": false, 00:10:01.924 "abort": true, 00:10:01.924 "seek_hole": false, 00:10:01.924 "seek_data": false, 
00:10:01.924 "copy": true, 00:10:01.924 "nvme_iov_md": false 00:10:01.924 }, 00:10:01.924 "memory_domains": [ 00:10:01.924 { 00:10:01.924 "dma_device_id": "system", 00:10:01.924 "dma_device_type": 1 00:10:01.924 }, 00:10:01.924 { 00:10:01.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.924 "dma_device_type": 2 00:10:01.924 } 00:10:01.924 ], 00:10:01.924 "driver_specific": {} 00:10:01.924 } 00:10:01.924 ] 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.924 BaseBdev4 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.924 
21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.924 21:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.924 [ 00:10:01.924 { 00:10:01.924 "name": "BaseBdev4", 00:10:01.924 "aliases": [ 00:10:01.924 "35a6a022-c7f8-41e0-b596-45d243ac3f87" 00:10:01.924 ], 00:10:01.924 "product_name": "Malloc disk", 00:10:01.924 "block_size": 512, 00:10:01.924 "num_blocks": 65536, 00:10:01.924 "uuid": "35a6a022-c7f8-41e0-b596-45d243ac3f87", 00:10:01.924 "assigned_rate_limits": { 00:10:01.924 "rw_ios_per_sec": 0, 00:10:01.924 "rw_mbytes_per_sec": 0, 00:10:01.924 "r_mbytes_per_sec": 0, 00:10:01.924 "w_mbytes_per_sec": 0 00:10:01.924 }, 00:10:01.924 "claimed": false, 00:10:01.924 "zoned": false, 00:10:01.924 "supported_io_types": { 00:10:01.924 "read": true, 00:10:01.924 "write": true, 00:10:01.924 "unmap": true, 00:10:01.924 "flush": true, 00:10:01.924 "reset": true, 00:10:01.924 "nvme_admin": false, 00:10:01.924 "nvme_io": false, 00:10:01.924 "nvme_io_md": false, 00:10:01.924 "write_zeroes": true, 00:10:01.924 "zcopy": true, 00:10:01.924 "get_zone_info": false, 00:10:01.924 "zone_management": false, 00:10:01.924 "zone_append": false, 00:10:01.924 "compare": false, 00:10:01.924 "compare_and_write": false, 00:10:01.924 "abort": true, 00:10:01.924 "seek_hole": false, 00:10:01.924 "seek_data": false, 00:10:01.924 
"copy": true, 00:10:01.924 "nvme_iov_md": false 00:10:01.924 }, 00:10:01.924 "memory_domains": [ 00:10:01.924 { 00:10:01.924 "dma_device_id": "system", 00:10:01.924 "dma_device_type": 1 00:10:01.924 }, 00:10:01.924 { 00:10:01.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.924 "dma_device_type": 2 00:10:01.924 } 00:10:01.924 ], 00:10:01.924 "driver_specific": {} 00:10:01.924 } 00:10:01.924 ] 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.924 [2024-11-26 21:17:20.020822] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.924 [2024-11-26 21:17:20.020975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.924 [2024-11-26 21:17:20.021030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.924 [2024-11-26 21:17:20.022878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.924 [2024-11-26 21:17:20.022983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.924 21:17:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.924 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.924 "name": "Existed_Raid", 00:10:01.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.924 "strip_size_kb": 64, 00:10:01.924 "state": "configuring", 00:10:01.924 
"raid_level": "raid0", 00:10:01.924 "superblock": false, 00:10:01.925 "num_base_bdevs": 4, 00:10:01.925 "num_base_bdevs_discovered": 3, 00:10:01.925 "num_base_bdevs_operational": 4, 00:10:01.925 "base_bdevs_list": [ 00:10:01.925 { 00:10:01.925 "name": "BaseBdev1", 00:10:01.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.925 "is_configured": false, 00:10:01.925 "data_offset": 0, 00:10:01.925 "data_size": 0 00:10:01.925 }, 00:10:01.925 { 00:10:01.925 "name": "BaseBdev2", 00:10:01.925 "uuid": "92548562-378e-4ec9-8100-02358d40669d", 00:10:01.925 "is_configured": true, 00:10:01.925 "data_offset": 0, 00:10:01.925 "data_size": 65536 00:10:01.925 }, 00:10:01.925 { 00:10:01.925 "name": "BaseBdev3", 00:10:01.925 "uuid": "986a57c2-f888-4400-8728-6dbcea41f05f", 00:10:01.925 "is_configured": true, 00:10:01.925 "data_offset": 0, 00:10:01.925 "data_size": 65536 00:10:01.925 }, 00:10:01.925 { 00:10:01.925 "name": "BaseBdev4", 00:10:01.925 "uuid": "35a6a022-c7f8-41e0-b596-45d243ac3f87", 00:10:01.925 "is_configured": true, 00:10:01.925 "data_offset": 0, 00:10:01.925 "data_size": 65536 00:10:01.925 } 00:10:01.925 ] 00:10:01.925 }' 00:10:01.925 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.925 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.493 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:02.493 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.493 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.493 [2024-11-26 21:17:20.468071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.493 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.493 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.493 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.493 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.493 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.494 "name": "Existed_Raid", 00:10:02.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.494 "strip_size_kb": 64, 00:10:02.494 "state": "configuring", 00:10:02.494 "raid_level": "raid0", 00:10:02.494 "superblock": false, 00:10:02.494 
"num_base_bdevs": 4, 00:10:02.494 "num_base_bdevs_discovered": 2, 00:10:02.494 "num_base_bdevs_operational": 4, 00:10:02.494 "base_bdevs_list": [ 00:10:02.494 { 00:10:02.494 "name": "BaseBdev1", 00:10:02.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.494 "is_configured": false, 00:10:02.494 "data_offset": 0, 00:10:02.494 "data_size": 0 00:10:02.494 }, 00:10:02.494 { 00:10:02.494 "name": null, 00:10:02.494 "uuid": "92548562-378e-4ec9-8100-02358d40669d", 00:10:02.494 "is_configured": false, 00:10:02.494 "data_offset": 0, 00:10:02.494 "data_size": 65536 00:10:02.494 }, 00:10:02.494 { 00:10:02.494 "name": "BaseBdev3", 00:10:02.494 "uuid": "986a57c2-f888-4400-8728-6dbcea41f05f", 00:10:02.494 "is_configured": true, 00:10:02.494 "data_offset": 0, 00:10:02.494 "data_size": 65536 00:10:02.494 }, 00:10:02.494 { 00:10:02.494 "name": "BaseBdev4", 00:10:02.494 "uuid": "35a6a022-c7f8-41e0-b596-45d243ac3f87", 00:10:02.494 "is_configured": true, 00:10:02.494 "data_offset": 0, 00:10:02.494 "data_size": 65536 00:10:02.494 } 00:10:02.494 ] 00:10:02.494 }' 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.494 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:03.063 21:17:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.063 [2024-11-26 21:17:20.995355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.063 BaseBdev1 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.063 21:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.063 [ 00:10:03.063 { 00:10:03.063 "name": "BaseBdev1", 00:10:03.063 "aliases": [ 00:10:03.063 "5761d1df-efb8-450d-833e-13328071afa8" 00:10:03.063 ], 00:10:03.063 "product_name": "Malloc disk", 00:10:03.063 "block_size": 512, 00:10:03.063 "num_blocks": 65536, 00:10:03.063 "uuid": "5761d1df-efb8-450d-833e-13328071afa8", 00:10:03.063 "assigned_rate_limits": { 00:10:03.063 "rw_ios_per_sec": 0, 00:10:03.063 "rw_mbytes_per_sec": 0, 00:10:03.063 "r_mbytes_per_sec": 0, 00:10:03.063 "w_mbytes_per_sec": 0 00:10:03.063 }, 00:10:03.063 "claimed": true, 00:10:03.063 "claim_type": "exclusive_write", 00:10:03.063 "zoned": false, 00:10:03.063 "supported_io_types": { 00:10:03.063 "read": true, 00:10:03.063 "write": true, 00:10:03.063 "unmap": true, 00:10:03.063 "flush": true, 00:10:03.063 "reset": true, 00:10:03.063 "nvme_admin": false, 00:10:03.063 "nvme_io": false, 00:10:03.063 "nvme_io_md": false, 00:10:03.063 "write_zeroes": true, 00:10:03.063 "zcopy": true, 00:10:03.063 "get_zone_info": false, 00:10:03.063 "zone_management": false, 00:10:03.063 "zone_append": false, 00:10:03.063 "compare": false, 00:10:03.063 "compare_and_write": false, 00:10:03.063 "abort": true, 00:10:03.063 "seek_hole": false, 00:10:03.063 "seek_data": false, 00:10:03.063 "copy": true, 00:10:03.063 "nvme_iov_md": false 00:10:03.063 }, 00:10:03.063 "memory_domains": [ 00:10:03.063 { 00:10:03.063 "dma_device_id": "system", 00:10:03.063 "dma_device_type": 1 00:10:03.063 }, 00:10:03.063 { 00:10:03.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.063 "dma_device_type": 2 00:10:03.063 } 00:10:03.063 ], 00:10:03.063 "driver_specific": {} 00:10:03.063 } 00:10:03.063 ] 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.063 "name": "Existed_Raid", 00:10:03.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.063 "strip_size_kb": 64, 00:10:03.063 "state": "configuring", 00:10:03.063 "raid_level": "raid0", 00:10:03.063 "superblock": false, 
00:10:03.063 "num_base_bdevs": 4, 00:10:03.063 "num_base_bdevs_discovered": 3, 00:10:03.063 "num_base_bdevs_operational": 4, 00:10:03.063 "base_bdevs_list": [ 00:10:03.063 { 00:10:03.063 "name": "BaseBdev1", 00:10:03.063 "uuid": "5761d1df-efb8-450d-833e-13328071afa8", 00:10:03.063 "is_configured": true, 00:10:03.063 "data_offset": 0, 00:10:03.063 "data_size": 65536 00:10:03.063 }, 00:10:03.063 { 00:10:03.063 "name": null, 00:10:03.063 "uuid": "92548562-378e-4ec9-8100-02358d40669d", 00:10:03.063 "is_configured": false, 00:10:03.063 "data_offset": 0, 00:10:03.063 "data_size": 65536 00:10:03.063 }, 00:10:03.063 { 00:10:03.063 "name": "BaseBdev3", 00:10:03.063 "uuid": "986a57c2-f888-4400-8728-6dbcea41f05f", 00:10:03.063 "is_configured": true, 00:10:03.063 "data_offset": 0, 00:10:03.063 "data_size": 65536 00:10:03.063 }, 00:10:03.063 { 00:10:03.063 "name": "BaseBdev4", 00:10:03.063 "uuid": "35a6a022-c7f8-41e0-b596-45d243ac3f87", 00:10:03.063 "is_configured": true, 00:10:03.063 "data_offset": 0, 00:10:03.063 "data_size": 65536 00:10:03.063 } 00:10:03.063 ] 00:10:03.063 }' 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.063 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.323 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.323 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.323 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.323 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:03.584 21:17:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.584 [2024-11-26 21:17:21.510650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.584 "name": "Existed_Raid", 00:10:03.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.584 "strip_size_kb": 64, 00:10:03.584 "state": "configuring", 00:10:03.584 "raid_level": "raid0", 00:10:03.584 "superblock": false, 00:10:03.584 "num_base_bdevs": 4, 00:10:03.584 "num_base_bdevs_discovered": 2, 00:10:03.584 "num_base_bdevs_operational": 4, 00:10:03.584 "base_bdevs_list": [ 00:10:03.584 { 00:10:03.584 "name": "BaseBdev1", 00:10:03.584 "uuid": "5761d1df-efb8-450d-833e-13328071afa8", 00:10:03.584 "is_configured": true, 00:10:03.584 "data_offset": 0, 00:10:03.584 "data_size": 65536 00:10:03.584 }, 00:10:03.584 { 00:10:03.584 "name": null, 00:10:03.584 "uuid": "92548562-378e-4ec9-8100-02358d40669d", 00:10:03.584 "is_configured": false, 00:10:03.584 "data_offset": 0, 00:10:03.584 "data_size": 65536 00:10:03.584 }, 00:10:03.584 { 00:10:03.584 "name": null, 00:10:03.584 "uuid": "986a57c2-f888-4400-8728-6dbcea41f05f", 00:10:03.584 "is_configured": false, 00:10:03.584 "data_offset": 0, 00:10:03.584 "data_size": 65536 00:10:03.584 }, 00:10:03.584 { 00:10:03.584 "name": "BaseBdev4", 00:10:03.584 "uuid": "35a6a022-c7f8-41e0-b596-45d243ac3f87", 00:10:03.584 "is_configured": true, 00:10:03.584 "data_offset": 0, 00:10:03.584 "data_size": 65536 00:10:03.584 } 00:10:03.584 ] 00:10:03.584 }' 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.584 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.844 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:03.844 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.844 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.845 [2024-11-26 21:17:21.977843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.845 21:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.105 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.105 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.105 "name": "Existed_Raid", 00:10:04.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.105 "strip_size_kb": 64, 00:10:04.105 "state": "configuring", 00:10:04.105 "raid_level": "raid0", 00:10:04.105 "superblock": false, 00:10:04.105 "num_base_bdevs": 4, 00:10:04.105 "num_base_bdevs_discovered": 3, 00:10:04.105 "num_base_bdevs_operational": 4, 00:10:04.105 "base_bdevs_list": [ 00:10:04.105 { 00:10:04.105 "name": "BaseBdev1", 00:10:04.105 "uuid": "5761d1df-efb8-450d-833e-13328071afa8", 00:10:04.105 "is_configured": true, 00:10:04.105 "data_offset": 0, 00:10:04.105 "data_size": 65536 00:10:04.105 }, 00:10:04.105 { 00:10:04.105 "name": null, 00:10:04.105 "uuid": "92548562-378e-4ec9-8100-02358d40669d", 00:10:04.105 "is_configured": false, 00:10:04.105 "data_offset": 0, 00:10:04.105 "data_size": 65536 00:10:04.105 }, 00:10:04.105 { 00:10:04.105 "name": "BaseBdev3", 00:10:04.105 "uuid": "986a57c2-f888-4400-8728-6dbcea41f05f", 00:10:04.105 "is_configured": 
true, 00:10:04.105 "data_offset": 0, 00:10:04.105 "data_size": 65536 00:10:04.105 }, 00:10:04.105 { 00:10:04.105 "name": "BaseBdev4", 00:10:04.105 "uuid": "35a6a022-c7f8-41e0-b596-45d243ac3f87", 00:10:04.105 "is_configured": true, 00:10:04.105 "data_offset": 0, 00:10:04.105 "data_size": 65536 00:10:04.105 } 00:10:04.105 ] 00:10:04.105 }' 00:10:04.105 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.105 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.365 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.365 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.365 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.365 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.365 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.365 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:04.365 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:04.365 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.365 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.365 [2024-11-26 21:17:22.489051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:04.624 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.624 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.624 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.625 "name": "Existed_Raid", 00:10:04.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.625 "strip_size_kb": 64, 00:10:04.625 "state": "configuring", 00:10:04.625 "raid_level": "raid0", 00:10:04.625 "superblock": false, 00:10:04.625 "num_base_bdevs": 4, 00:10:04.625 "num_base_bdevs_discovered": 2, 00:10:04.625 "num_base_bdevs_operational": 4, 00:10:04.625 
"base_bdevs_list": [ 00:10:04.625 { 00:10:04.625 "name": null, 00:10:04.625 "uuid": "5761d1df-efb8-450d-833e-13328071afa8", 00:10:04.625 "is_configured": false, 00:10:04.625 "data_offset": 0, 00:10:04.625 "data_size": 65536 00:10:04.625 }, 00:10:04.625 { 00:10:04.625 "name": null, 00:10:04.625 "uuid": "92548562-378e-4ec9-8100-02358d40669d", 00:10:04.625 "is_configured": false, 00:10:04.625 "data_offset": 0, 00:10:04.625 "data_size": 65536 00:10:04.625 }, 00:10:04.625 { 00:10:04.625 "name": "BaseBdev3", 00:10:04.625 "uuid": "986a57c2-f888-4400-8728-6dbcea41f05f", 00:10:04.625 "is_configured": true, 00:10:04.625 "data_offset": 0, 00:10:04.625 "data_size": 65536 00:10:04.625 }, 00:10:04.625 { 00:10:04.625 "name": "BaseBdev4", 00:10:04.625 "uuid": "35a6a022-c7f8-41e0-b596-45d243ac3f87", 00:10:04.625 "is_configured": true, 00:10:04.625 "data_offset": 0, 00:10:04.625 "data_size": 65536 00:10:04.625 } 00:10:04.625 ] 00:10:04.625 }' 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.625 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.884 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.884 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.884 21:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.884 21:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.884 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:05.144 21:17:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.144 [2024-11-26 21:17:23.049802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.144 "name": "Existed_Raid", 00:10:05.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.144 "strip_size_kb": 64, 00:10:05.144 "state": "configuring", 00:10:05.144 "raid_level": "raid0", 00:10:05.144 "superblock": false, 00:10:05.144 "num_base_bdevs": 4, 00:10:05.144 "num_base_bdevs_discovered": 3, 00:10:05.144 "num_base_bdevs_operational": 4, 00:10:05.144 "base_bdevs_list": [ 00:10:05.144 { 00:10:05.144 "name": null, 00:10:05.144 "uuid": "5761d1df-efb8-450d-833e-13328071afa8", 00:10:05.144 "is_configured": false, 00:10:05.144 "data_offset": 0, 00:10:05.144 "data_size": 65536 00:10:05.144 }, 00:10:05.144 { 00:10:05.144 "name": "BaseBdev2", 00:10:05.144 "uuid": "92548562-378e-4ec9-8100-02358d40669d", 00:10:05.144 "is_configured": true, 00:10:05.144 "data_offset": 0, 00:10:05.144 "data_size": 65536 00:10:05.144 }, 00:10:05.144 { 00:10:05.144 "name": "BaseBdev3", 00:10:05.144 "uuid": "986a57c2-f888-4400-8728-6dbcea41f05f", 00:10:05.144 "is_configured": true, 00:10:05.144 "data_offset": 0, 00:10:05.144 "data_size": 65536 00:10:05.144 }, 00:10:05.144 { 00:10:05.144 "name": "BaseBdev4", 00:10:05.144 "uuid": "35a6a022-c7f8-41e0-b596-45d243ac3f87", 00:10:05.144 "is_configured": true, 00:10:05.144 "data_offset": 0, 00:10:05.144 "data_size": 65536 00:10:05.144 } 00:10:05.144 ] 00:10:05.144 }' 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.144 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.404 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.404 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:05.404 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.404 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.404 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.404 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:05.404 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:05.404 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.404 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.404 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5761d1df-efb8-450d-833e-13328071afa8 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.664 [2024-11-26 21:17:23.627621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:05.664 [2024-11-26 21:17:23.627667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:05.664 [2024-11-26 21:17:23.627674] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:05.664 [2024-11-26 21:17:23.627924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:05.664 [2024-11-26 21:17:23.628101] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:05.664 [2024-11-26 21:17:23.628112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:05.664 NewBaseBdev 00:10:05.664 [2024-11-26 21:17:23.628361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.664 [ 00:10:05.664 { 
00:10:05.664 "name": "NewBaseBdev", 00:10:05.664 "aliases": [ 00:10:05.664 "5761d1df-efb8-450d-833e-13328071afa8" 00:10:05.664 ], 00:10:05.664 "product_name": "Malloc disk", 00:10:05.664 "block_size": 512, 00:10:05.664 "num_blocks": 65536, 00:10:05.664 "uuid": "5761d1df-efb8-450d-833e-13328071afa8", 00:10:05.664 "assigned_rate_limits": { 00:10:05.664 "rw_ios_per_sec": 0, 00:10:05.664 "rw_mbytes_per_sec": 0, 00:10:05.664 "r_mbytes_per_sec": 0, 00:10:05.664 "w_mbytes_per_sec": 0 00:10:05.664 }, 00:10:05.664 "claimed": true, 00:10:05.664 "claim_type": "exclusive_write", 00:10:05.664 "zoned": false, 00:10:05.664 "supported_io_types": { 00:10:05.664 "read": true, 00:10:05.664 "write": true, 00:10:05.664 "unmap": true, 00:10:05.664 "flush": true, 00:10:05.664 "reset": true, 00:10:05.664 "nvme_admin": false, 00:10:05.664 "nvme_io": false, 00:10:05.664 "nvme_io_md": false, 00:10:05.664 "write_zeroes": true, 00:10:05.664 "zcopy": true, 00:10:05.664 "get_zone_info": false, 00:10:05.664 "zone_management": false, 00:10:05.664 "zone_append": false, 00:10:05.664 "compare": false, 00:10:05.664 "compare_and_write": false, 00:10:05.664 "abort": true, 00:10:05.664 "seek_hole": false, 00:10:05.664 "seek_data": false, 00:10:05.664 "copy": true, 00:10:05.664 "nvme_iov_md": false 00:10:05.664 }, 00:10:05.664 "memory_domains": [ 00:10:05.664 { 00:10:05.664 "dma_device_id": "system", 00:10:05.664 "dma_device_type": 1 00:10:05.664 }, 00:10:05.664 { 00:10:05.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.664 "dma_device_type": 2 00:10:05.664 } 00:10:05.664 ], 00:10:05.664 "driver_specific": {} 00:10:05.664 } 00:10:05.664 ] 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:05.664 
21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.664 "name": "Existed_Raid", 00:10:05.664 "uuid": "65327df5-9cf3-4a20-8666-e244e50a0a57", 00:10:05.664 "strip_size_kb": 64, 00:10:05.664 "state": "online", 00:10:05.664 "raid_level": "raid0", 00:10:05.664 "superblock": false, 00:10:05.664 "num_base_bdevs": 4, 00:10:05.664 "num_base_bdevs_discovered": 4, 00:10:05.664 
"num_base_bdevs_operational": 4, 00:10:05.664 "base_bdevs_list": [ 00:10:05.664 { 00:10:05.664 "name": "NewBaseBdev", 00:10:05.664 "uuid": "5761d1df-efb8-450d-833e-13328071afa8", 00:10:05.664 "is_configured": true, 00:10:05.664 "data_offset": 0, 00:10:05.664 "data_size": 65536 00:10:05.664 }, 00:10:05.664 { 00:10:05.664 "name": "BaseBdev2", 00:10:05.664 "uuid": "92548562-378e-4ec9-8100-02358d40669d", 00:10:05.664 "is_configured": true, 00:10:05.664 "data_offset": 0, 00:10:05.664 "data_size": 65536 00:10:05.664 }, 00:10:05.664 { 00:10:05.664 "name": "BaseBdev3", 00:10:05.664 "uuid": "986a57c2-f888-4400-8728-6dbcea41f05f", 00:10:05.664 "is_configured": true, 00:10:05.664 "data_offset": 0, 00:10:05.664 "data_size": 65536 00:10:05.664 }, 00:10:05.664 { 00:10:05.664 "name": "BaseBdev4", 00:10:05.664 "uuid": "35a6a022-c7f8-41e0-b596-45d243ac3f87", 00:10:05.664 "is_configured": true, 00:10:05.664 "data_offset": 0, 00:10:05.664 "data_size": 65536 00:10:05.664 } 00:10:05.664 ] 00:10:05.664 }' 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.664 21:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.925 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.925 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.925 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.925 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.925 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.925 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.925 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:05.925 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.925 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.925 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.925 [2024-11-26 21:17:24.047330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.925 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.185 "name": "Existed_Raid", 00:10:06.185 "aliases": [ 00:10:06.185 "65327df5-9cf3-4a20-8666-e244e50a0a57" 00:10:06.185 ], 00:10:06.185 "product_name": "Raid Volume", 00:10:06.185 "block_size": 512, 00:10:06.185 "num_blocks": 262144, 00:10:06.185 "uuid": "65327df5-9cf3-4a20-8666-e244e50a0a57", 00:10:06.185 "assigned_rate_limits": { 00:10:06.185 "rw_ios_per_sec": 0, 00:10:06.185 "rw_mbytes_per_sec": 0, 00:10:06.185 "r_mbytes_per_sec": 0, 00:10:06.185 "w_mbytes_per_sec": 0 00:10:06.185 }, 00:10:06.185 "claimed": false, 00:10:06.185 "zoned": false, 00:10:06.185 "supported_io_types": { 00:10:06.185 "read": true, 00:10:06.185 "write": true, 00:10:06.185 "unmap": true, 00:10:06.185 "flush": true, 00:10:06.185 "reset": true, 00:10:06.185 "nvme_admin": false, 00:10:06.185 "nvme_io": false, 00:10:06.185 "nvme_io_md": false, 00:10:06.185 "write_zeroes": true, 00:10:06.185 "zcopy": false, 00:10:06.185 "get_zone_info": false, 00:10:06.185 "zone_management": false, 00:10:06.185 "zone_append": false, 00:10:06.185 "compare": false, 00:10:06.185 "compare_and_write": false, 00:10:06.185 "abort": false, 00:10:06.185 "seek_hole": false, 00:10:06.185 "seek_data": false, 00:10:06.185 "copy": false, 00:10:06.185 "nvme_iov_md": false 00:10:06.185 }, 00:10:06.185 "memory_domains": [ 00:10:06.185 { 00:10:06.185 "dma_device_id": "system", 
00:10:06.185 "dma_device_type": 1 00:10:06.185 }, 00:10:06.185 { 00:10:06.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.185 "dma_device_type": 2 00:10:06.185 }, 00:10:06.185 { 00:10:06.185 "dma_device_id": "system", 00:10:06.185 "dma_device_type": 1 00:10:06.185 }, 00:10:06.185 { 00:10:06.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.185 "dma_device_type": 2 00:10:06.185 }, 00:10:06.185 { 00:10:06.185 "dma_device_id": "system", 00:10:06.185 "dma_device_type": 1 00:10:06.185 }, 00:10:06.185 { 00:10:06.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.185 "dma_device_type": 2 00:10:06.185 }, 00:10:06.185 { 00:10:06.185 "dma_device_id": "system", 00:10:06.185 "dma_device_type": 1 00:10:06.185 }, 00:10:06.185 { 00:10:06.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.185 "dma_device_type": 2 00:10:06.185 } 00:10:06.185 ], 00:10:06.185 "driver_specific": { 00:10:06.185 "raid": { 00:10:06.185 "uuid": "65327df5-9cf3-4a20-8666-e244e50a0a57", 00:10:06.185 "strip_size_kb": 64, 00:10:06.185 "state": "online", 00:10:06.185 "raid_level": "raid0", 00:10:06.185 "superblock": false, 00:10:06.185 "num_base_bdevs": 4, 00:10:06.185 "num_base_bdevs_discovered": 4, 00:10:06.185 "num_base_bdevs_operational": 4, 00:10:06.185 "base_bdevs_list": [ 00:10:06.185 { 00:10:06.185 "name": "NewBaseBdev", 00:10:06.185 "uuid": "5761d1df-efb8-450d-833e-13328071afa8", 00:10:06.185 "is_configured": true, 00:10:06.185 "data_offset": 0, 00:10:06.185 "data_size": 65536 00:10:06.185 }, 00:10:06.185 { 00:10:06.185 "name": "BaseBdev2", 00:10:06.185 "uuid": "92548562-378e-4ec9-8100-02358d40669d", 00:10:06.185 "is_configured": true, 00:10:06.185 "data_offset": 0, 00:10:06.185 "data_size": 65536 00:10:06.185 }, 00:10:06.185 { 00:10:06.185 "name": "BaseBdev3", 00:10:06.185 "uuid": "986a57c2-f888-4400-8728-6dbcea41f05f", 00:10:06.185 "is_configured": true, 00:10:06.185 "data_offset": 0, 00:10:06.185 "data_size": 65536 00:10:06.185 }, 00:10:06.185 { 00:10:06.185 "name": "BaseBdev4", 
00:10:06.185 "uuid": "35a6a022-c7f8-41e0-b596-45d243ac3f87", 00:10:06.185 "is_configured": true, 00:10:06.185 "data_offset": 0, 00:10:06.185 "data_size": 65536 00:10:06.185 } 00:10:06.185 ] 00:10:06.185 } 00:10:06.185 } 00:10:06.185 }' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:06.185 BaseBdev2 00:10:06.185 BaseBdev3 00:10:06.185 BaseBdev4' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:06.185 21:17:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.185 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.445 [2024-11-26 21:17:24.342434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.445 [2024-11-26 21:17:24.342467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.445 [2024-11-26 21:17:24.342536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.445 [2024-11-26 21:17:24.342601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.445 [2024-11-26 21:17:24.342611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69204 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69204 ']' 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69204 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69204 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.445 killing process with pid 69204 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69204' 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69204 00:10:06.445 [2024-11-26 21:17:24.388995] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.445 21:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69204 00:10:06.704 [2024-11-26 21:17:24.768845] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:08.086 00:10:08.086 real 0m11.171s 00:10:08.086 user 0m17.827s 00:10:08.086 sys 0m1.963s 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.086 ************************************ 00:10:08.086 END TEST raid_state_function_test 00:10:08.086 ************************************ 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.086 21:17:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:10:08.086 21:17:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:08.086 21:17:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.086 21:17:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.086 ************************************ 00:10:08.086 START TEST raid_state_function_test_sb 00:10:08.086 ************************************ 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:08.086 21:17:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69870 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69870' 00:10:08.086 Process raid pid: 69870 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69870 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69870 ']' 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.086 21:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.086 [2024-11-26 21:17:26.012072] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:08.086 [2024-11-26 21:17:26.012268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.086 [2024-11-26 21:17:26.184130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.346 [2024-11-26 21:17:26.296750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.346 [2024-11-26 21:17:26.496665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.346 [2024-11-26 21:17:26.496803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.916 21:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.916 21:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:08.916 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.916 21:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.916 21:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.916 [2024-11-26 21:17:26.872583] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.916 [2024-11-26 21:17:26.872647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.916 [2024-11-26 21:17:26.872666] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.916 [2024-11-26 21:17:26.872677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.916 [2024-11-26 21:17:26.872683] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:08.916 [2024-11-26 21:17:26.872691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.916 [2024-11-26 21:17:26.872697] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:08.917 [2024-11-26 21:17:26.872706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.917 21:17:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.917 "name": "Existed_Raid", 00:10:08.917 "uuid": "66c435bb-e58a-4260-b616-3d695a3aa1b6", 00:10:08.917 "strip_size_kb": 64, 00:10:08.917 "state": "configuring", 00:10:08.917 "raid_level": "raid0", 00:10:08.917 "superblock": true, 00:10:08.917 "num_base_bdevs": 4, 00:10:08.917 "num_base_bdevs_discovered": 0, 00:10:08.917 "num_base_bdevs_operational": 4, 00:10:08.917 "base_bdevs_list": [ 00:10:08.917 { 00:10:08.917 "name": "BaseBdev1", 00:10:08.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.917 "is_configured": false, 00:10:08.917 "data_offset": 0, 00:10:08.917 "data_size": 0 00:10:08.917 }, 00:10:08.917 { 00:10:08.917 "name": "BaseBdev2", 00:10:08.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.917 "is_configured": false, 00:10:08.917 "data_offset": 0, 00:10:08.917 "data_size": 0 00:10:08.917 }, 00:10:08.917 { 00:10:08.917 "name": "BaseBdev3", 00:10:08.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.917 "is_configured": false, 00:10:08.917 "data_offset": 0, 00:10:08.917 "data_size": 0 00:10:08.917 }, 00:10:08.917 { 00:10:08.917 "name": "BaseBdev4", 00:10:08.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.917 "is_configured": false, 00:10:08.917 "data_offset": 0, 00:10:08.917 "data_size": 0 00:10:08.917 } 00:10:08.917 ] 00:10:08.917 }' 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.917 21:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.178 21:17:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.178 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.178 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.178 [2024-11-26 21:17:27.311741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.178 [2024-11-26 21:17:27.311885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:09.178 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.178 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:09.178 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.178 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.178 [2024-11-26 21:17:27.323754] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.178 [2024-11-26 21:17:27.323894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.178 [2024-11-26 21:17:27.323926] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.178 [2024-11-26 21:17:27.323949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.178 [2024-11-26 21:17:27.323986] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.178 [2024-11-26 21:17:27.324010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.178 [2024-11-26 21:17:27.324030] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:09.178 [2024-11-26 21:17:27.324062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:09.178 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.178 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.178 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.178 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.438 [2024-11-26 21:17:27.372627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.438 BaseBdev1 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.438 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.438 [ 00:10:09.438 { 00:10:09.438 "name": "BaseBdev1", 00:10:09.438 "aliases": [ 00:10:09.438 "7e5b3612-f730-43e3-a145-c3847856ff4d" 00:10:09.438 ], 00:10:09.438 "product_name": "Malloc disk", 00:10:09.438 "block_size": 512, 00:10:09.438 "num_blocks": 65536, 00:10:09.438 "uuid": "7e5b3612-f730-43e3-a145-c3847856ff4d", 00:10:09.438 "assigned_rate_limits": { 00:10:09.438 "rw_ios_per_sec": 0, 00:10:09.438 "rw_mbytes_per_sec": 0, 00:10:09.438 "r_mbytes_per_sec": 0, 00:10:09.438 "w_mbytes_per_sec": 0 00:10:09.438 }, 00:10:09.438 "claimed": true, 00:10:09.438 "claim_type": "exclusive_write", 00:10:09.438 "zoned": false, 00:10:09.438 "supported_io_types": { 00:10:09.438 "read": true, 00:10:09.438 "write": true, 00:10:09.438 "unmap": true, 00:10:09.438 "flush": true, 00:10:09.438 "reset": true, 00:10:09.438 "nvme_admin": false, 00:10:09.438 "nvme_io": false, 00:10:09.439 "nvme_io_md": false, 00:10:09.439 "write_zeroes": true, 00:10:09.439 "zcopy": true, 00:10:09.439 "get_zone_info": false, 00:10:09.439 "zone_management": false, 00:10:09.439 "zone_append": false, 00:10:09.439 "compare": false, 00:10:09.439 "compare_and_write": false, 00:10:09.439 "abort": true, 00:10:09.439 "seek_hole": false, 00:10:09.439 "seek_data": false, 00:10:09.439 "copy": true, 00:10:09.439 "nvme_iov_md": false 00:10:09.439 }, 00:10:09.439 "memory_domains": [ 00:10:09.439 { 00:10:09.439 "dma_device_id": "system", 00:10:09.439 "dma_device_type": 1 00:10:09.439 }, 00:10:09.439 { 00:10:09.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.439 "dma_device_type": 2 00:10:09.439 } 00:10:09.439 ], 00:10:09.439 "driver_specific": {} 
00:10:09.439 } 00:10:09.439 ] 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.439 "name": "Existed_Raid", 00:10:09.439 "uuid": "bce1ba83-a94d-4420-b02b-c711fc98e932", 00:10:09.439 "strip_size_kb": 64, 00:10:09.439 "state": "configuring", 00:10:09.439 "raid_level": "raid0", 00:10:09.439 "superblock": true, 00:10:09.439 "num_base_bdevs": 4, 00:10:09.439 "num_base_bdevs_discovered": 1, 00:10:09.439 "num_base_bdevs_operational": 4, 00:10:09.439 "base_bdevs_list": [ 00:10:09.439 { 00:10:09.439 "name": "BaseBdev1", 00:10:09.439 "uuid": "7e5b3612-f730-43e3-a145-c3847856ff4d", 00:10:09.439 "is_configured": true, 00:10:09.439 "data_offset": 2048, 00:10:09.439 "data_size": 63488 00:10:09.439 }, 00:10:09.439 { 00:10:09.439 "name": "BaseBdev2", 00:10:09.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.439 "is_configured": false, 00:10:09.439 "data_offset": 0, 00:10:09.439 "data_size": 0 00:10:09.439 }, 00:10:09.439 { 00:10:09.439 "name": "BaseBdev3", 00:10:09.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.439 "is_configured": false, 00:10:09.439 "data_offset": 0, 00:10:09.439 "data_size": 0 00:10:09.439 }, 00:10:09.439 { 00:10:09.439 "name": "BaseBdev4", 00:10:09.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.439 "is_configured": false, 00:10:09.439 "data_offset": 0, 00:10:09.439 "data_size": 0 00:10:09.439 } 00:10:09.439 ] 00:10:09.439 }' 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.439 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.700 [2024-11-26 21:17:27.831908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.700 [2024-11-26 21:17:27.831989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.700 [2024-11-26 21:17:27.843937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.700 [2024-11-26 21:17:27.845822] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.700 [2024-11-26 21:17:27.845906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.700 [2024-11-26 21:17:27.845936] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.700 [2024-11-26 21:17:27.845968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.700 [2024-11-26 21:17:27.845988] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:09.700 [2024-11-26 21:17:27.846009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:09.700 21:17:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.700 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.960 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.960 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.960 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.960 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.960 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.960 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.960 "name": 
"Existed_Raid", 00:10:09.960 "uuid": "120461e1-a7e8-4b3b-ad0b-80d98ad2d7fe", 00:10:09.960 "strip_size_kb": 64, 00:10:09.960 "state": "configuring", 00:10:09.960 "raid_level": "raid0", 00:10:09.960 "superblock": true, 00:10:09.960 "num_base_bdevs": 4, 00:10:09.960 "num_base_bdevs_discovered": 1, 00:10:09.960 "num_base_bdevs_operational": 4, 00:10:09.960 "base_bdevs_list": [ 00:10:09.960 { 00:10:09.960 "name": "BaseBdev1", 00:10:09.960 "uuid": "7e5b3612-f730-43e3-a145-c3847856ff4d", 00:10:09.960 "is_configured": true, 00:10:09.960 "data_offset": 2048, 00:10:09.960 "data_size": 63488 00:10:09.960 }, 00:10:09.960 { 00:10:09.960 "name": "BaseBdev2", 00:10:09.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.960 "is_configured": false, 00:10:09.960 "data_offset": 0, 00:10:09.960 "data_size": 0 00:10:09.960 }, 00:10:09.960 { 00:10:09.960 "name": "BaseBdev3", 00:10:09.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.960 "is_configured": false, 00:10:09.960 "data_offset": 0, 00:10:09.960 "data_size": 0 00:10:09.960 }, 00:10:09.960 { 00:10:09.960 "name": "BaseBdev4", 00:10:09.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.960 "is_configured": false, 00:10:09.960 "data_offset": 0, 00:10:09.960 "data_size": 0 00:10:09.960 } 00:10:09.960 ] 00:10:09.960 }' 00:10:09.960 21:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.960 21:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.220 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.220 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.220 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.220 [2024-11-26 21:17:28.340116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:10.220 BaseBdev2 00:10:10.220 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.221 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.221 [ 00:10:10.221 { 00:10:10.221 "name": "BaseBdev2", 00:10:10.221 "aliases": [ 00:10:10.221 "076a63b6-dcd6-4cdc-a9ae-110f0d228ed3" 00:10:10.221 ], 00:10:10.221 "product_name": "Malloc disk", 00:10:10.221 "block_size": 512, 00:10:10.221 "num_blocks": 65536, 00:10:10.221 "uuid": "076a63b6-dcd6-4cdc-a9ae-110f0d228ed3", 00:10:10.221 
"assigned_rate_limits": { 00:10:10.221 "rw_ios_per_sec": 0, 00:10:10.221 "rw_mbytes_per_sec": 0, 00:10:10.221 "r_mbytes_per_sec": 0, 00:10:10.221 "w_mbytes_per_sec": 0 00:10:10.221 }, 00:10:10.221 "claimed": true, 00:10:10.221 "claim_type": "exclusive_write", 00:10:10.221 "zoned": false, 00:10:10.221 "supported_io_types": { 00:10:10.221 "read": true, 00:10:10.221 "write": true, 00:10:10.221 "unmap": true, 00:10:10.221 "flush": true, 00:10:10.221 "reset": true, 00:10:10.221 "nvme_admin": false, 00:10:10.221 "nvme_io": false, 00:10:10.221 "nvme_io_md": false, 00:10:10.221 "write_zeroes": true, 00:10:10.221 "zcopy": true, 00:10:10.221 "get_zone_info": false, 00:10:10.221 "zone_management": false, 00:10:10.221 "zone_append": false, 00:10:10.221 "compare": false, 00:10:10.221 "compare_and_write": false, 00:10:10.221 "abort": true, 00:10:10.221 "seek_hole": false, 00:10:10.221 "seek_data": false, 00:10:10.481 "copy": true, 00:10:10.481 "nvme_iov_md": false 00:10:10.481 }, 00:10:10.481 "memory_domains": [ 00:10:10.481 { 00:10:10.481 "dma_device_id": "system", 00:10:10.481 "dma_device_type": 1 00:10:10.481 }, 00:10:10.481 { 00:10:10.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.481 "dma_device_type": 2 00:10:10.481 } 00:10:10.481 ], 00:10:10.481 "driver_specific": {} 00:10:10.481 } 00:10:10.481 ] 00:10:10.481 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.481 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.481 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.482 "name": "Existed_Raid", 00:10:10.482 "uuid": "120461e1-a7e8-4b3b-ad0b-80d98ad2d7fe", 00:10:10.482 "strip_size_kb": 64, 00:10:10.482 "state": "configuring", 00:10:10.482 "raid_level": "raid0", 00:10:10.482 "superblock": true, 00:10:10.482 "num_base_bdevs": 4, 00:10:10.482 "num_base_bdevs_discovered": 2, 00:10:10.482 "num_base_bdevs_operational": 4, 
00:10:10.482 "base_bdevs_list": [ 00:10:10.482 { 00:10:10.482 "name": "BaseBdev1", 00:10:10.482 "uuid": "7e5b3612-f730-43e3-a145-c3847856ff4d", 00:10:10.482 "is_configured": true, 00:10:10.482 "data_offset": 2048, 00:10:10.482 "data_size": 63488 00:10:10.482 }, 00:10:10.482 { 00:10:10.482 "name": "BaseBdev2", 00:10:10.482 "uuid": "076a63b6-dcd6-4cdc-a9ae-110f0d228ed3", 00:10:10.482 "is_configured": true, 00:10:10.482 "data_offset": 2048, 00:10:10.482 "data_size": 63488 00:10:10.482 }, 00:10:10.482 { 00:10:10.482 "name": "BaseBdev3", 00:10:10.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.482 "is_configured": false, 00:10:10.482 "data_offset": 0, 00:10:10.482 "data_size": 0 00:10:10.482 }, 00:10:10.482 { 00:10:10.482 "name": "BaseBdev4", 00:10:10.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.482 "is_configured": false, 00:10:10.482 "data_offset": 0, 00:10:10.482 "data_size": 0 00:10:10.482 } 00:10:10.482 ] 00:10:10.482 }' 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.482 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.742 [2024-11-26 21:17:28.839910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.742 BaseBdev3 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.742 [ 00:10:10.742 { 00:10:10.742 "name": "BaseBdev3", 00:10:10.742 "aliases": [ 00:10:10.742 "1df0645a-ad0a-4748-a449-5e69ef7ce651" 00:10:10.742 ], 00:10:10.742 "product_name": "Malloc disk", 00:10:10.742 "block_size": 512, 00:10:10.742 "num_blocks": 65536, 00:10:10.742 "uuid": "1df0645a-ad0a-4748-a449-5e69ef7ce651", 00:10:10.742 "assigned_rate_limits": { 00:10:10.742 "rw_ios_per_sec": 0, 00:10:10.742 "rw_mbytes_per_sec": 0, 00:10:10.742 "r_mbytes_per_sec": 0, 00:10:10.742 "w_mbytes_per_sec": 0 00:10:10.742 }, 00:10:10.742 "claimed": true, 00:10:10.742 "claim_type": "exclusive_write", 00:10:10.742 "zoned": false, 00:10:10.742 "supported_io_types": { 00:10:10.742 "read": true, 00:10:10.742 
"write": true, 00:10:10.742 "unmap": true, 00:10:10.742 "flush": true, 00:10:10.742 "reset": true, 00:10:10.742 "nvme_admin": false, 00:10:10.742 "nvme_io": false, 00:10:10.742 "nvme_io_md": false, 00:10:10.742 "write_zeroes": true, 00:10:10.742 "zcopy": true, 00:10:10.742 "get_zone_info": false, 00:10:10.742 "zone_management": false, 00:10:10.742 "zone_append": false, 00:10:10.742 "compare": false, 00:10:10.742 "compare_and_write": false, 00:10:10.742 "abort": true, 00:10:10.742 "seek_hole": false, 00:10:10.742 "seek_data": false, 00:10:10.742 "copy": true, 00:10:10.742 "nvme_iov_md": false 00:10:10.742 }, 00:10:10.742 "memory_domains": [ 00:10:10.742 { 00:10:10.742 "dma_device_id": "system", 00:10:10.742 "dma_device_type": 1 00:10:10.742 }, 00:10:10.742 { 00:10:10.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.742 "dma_device_type": 2 00:10:10.742 } 00:10:10.742 ], 00:10:10.742 "driver_specific": {} 00:10:10.742 } 00:10:10.742 ] 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.742 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.743 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.002 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.003 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.003 "name": "Existed_Raid", 00:10:11.003 "uuid": "120461e1-a7e8-4b3b-ad0b-80d98ad2d7fe", 00:10:11.003 "strip_size_kb": 64, 00:10:11.003 "state": "configuring", 00:10:11.003 "raid_level": "raid0", 00:10:11.003 "superblock": true, 00:10:11.003 "num_base_bdevs": 4, 00:10:11.003 "num_base_bdevs_discovered": 3, 00:10:11.003 "num_base_bdevs_operational": 4, 00:10:11.003 "base_bdevs_list": [ 00:10:11.003 { 00:10:11.003 "name": "BaseBdev1", 00:10:11.003 "uuid": "7e5b3612-f730-43e3-a145-c3847856ff4d", 00:10:11.003 "is_configured": true, 00:10:11.003 "data_offset": 2048, 00:10:11.003 "data_size": 63488 00:10:11.003 }, 00:10:11.003 { 00:10:11.003 "name": "BaseBdev2", 00:10:11.003 "uuid": 
"076a63b6-dcd6-4cdc-a9ae-110f0d228ed3", 00:10:11.003 "is_configured": true, 00:10:11.003 "data_offset": 2048, 00:10:11.003 "data_size": 63488 00:10:11.003 }, 00:10:11.003 { 00:10:11.003 "name": "BaseBdev3", 00:10:11.003 "uuid": "1df0645a-ad0a-4748-a449-5e69ef7ce651", 00:10:11.003 "is_configured": true, 00:10:11.003 "data_offset": 2048, 00:10:11.003 "data_size": 63488 00:10:11.003 }, 00:10:11.003 { 00:10:11.003 "name": "BaseBdev4", 00:10:11.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.003 "is_configured": false, 00:10:11.003 "data_offset": 0, 00:10:11.003 "data_size": 0 00:10:11.003 } 00:10:11.003 ] 00:10:11.003 }' 00:10:11.003 21:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.003 21:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.264 [2024-11-26 21:17:29.369949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:11.264 [2024-11-26 21:17:29.370224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:11.264 [2024-11-26 21:17:29.370240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:11.264 [2024-11-26 21:17:29.370531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:11.264 [2024-11-26 21:17:29.370689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:11.264 [2024-11-26 21:17:29.370701] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:10:11.264 BaseBdev4 00:10:11.264 [2024-11-26 21:17:29.370827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.264 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.264 [ 00:10:11.264 { 00:10:11.264 "name": "BaseBdev4", 00:10:11.264 "aliases": [ 00:10:11.264 "e228684b-2b37-4d49-937e-8ed2338ee9e5" 00:10:11.264 ], 00:10:11.264 "product_name": "Malloc disk", 00:10:11.264 "block_size": 512, 00:10:11.264 
"num_blocks": 65536, 00:10:11.264 "uuid": "e228684b-2b37-4d49-937e-8ed2338ee9e5", 00:10:11.265 "assigned_rate_limits": { 00:10:11.265 "rw_ios_per_sec": 0, 00:10:11.265 "rw_mbytes_per_sec": 0, 00:10:11.265 "r_mbytes_per_sec": 0, 00:10:11.265 "w_mbytes_per_sec": 0 00:10:11.265 }, 00:10:11.265 "claimed": true, 00:10:11.265 "claim_type": "exclusive_write", 00:10:11.265 "zoned": false, 00:10:11.265 "supported_io_types": { 00:10:11.265 "read": true, 00:10:11.265 "write": true, 00:10:11.265 "unmap": true, 00:10:11.265 "flush": true, 00:10:11.265 "reset": true, 00:10:11.265 "nvme_admin": false, 00:10:11.265 "nvme_io": false, 00:10:11.265 "nvme_io_md": false, 00:10:11.265 "write_zeroes": true, 00:10:11.265 "zcopy": true, 00:10:11.265 "get_zone_info": false, 00:10:11.265 "zone_management": false, 00:10:11.265 "zone_append": false, 00:10:11.265 "compare": false, 00:10:11.265 "compare_and_write": false, 00:10:11.265 "abort": true, 00:10:11.265 "seek_hole": false, 00:10:11.265 "seek_data": false, 00:10:11.265 "copy": true, 00:10:11.265 "nvme_iov_md": false 00:10:11.265 }, 00:10:11.265 "memory_domains": [ 00:10:11.265 { 00:10:11.265 "dma_device_id": "system", 00:10:11.265 "dma_device_type": 1 00:10:11.265 }, 00:10:11.265 { 00:10:11.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.265 "dma_device_type": 2 00:10:11.265 } 00:10:11.265 ], 00:10:11.265 "driver_specific": {} 00:10:11.265 } 00:10:11.265 ] 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.265 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.523 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.523 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.523 "name": "Existed_Raid", 00:10:11.523 "uuid": "120461e1-a7e8-4b3b-ad0b-80d98ad2d7fe", 00:10:11.523 "strip_size_kb": 64, 00:10:11.523 "state": "online", 00:10:11.523 "raid_level": "raid0", 00:10:11.523 "superblock": true, 00:10:11.523 "num_base_bdevs": 4, 
00:10:11.523 "num_base_bdevs_discovered": 4, 00:10:11.523 "num_base_bdevs_operational": 4, 00:10:11.523 "base_bdevs_list": [ 00:10:11.523 { 00:10:11.523 "name": "BaseBdev1", 00:10:11.523 "uuid": "7e5b3612-f730-43e3-a145-c3847856ff4d", 00:10:11.523 "is_configured": true, 00:10:11.523 "data_offset": 2048, 00:10:11.523 "data_size": 63488 00:10:11.523 }, 00:10:11.523 { 00:10:11.523 "name": "BaseBdev2", 00:10:11.523 "uuid": "076a63b6-dcd6-4cdc-a9ae-110f0d228ed3", 00:10:11.523 "is_configured": true, 00:10:11.523 "data_offset": 2048, 00:10:11.523 "data_size": 63488 00:10:11.523 }, 00:10:11.523 { 00:10:11.523 "name": "BaseBdev3", 00:10:11.523 "uuid": "1df0645a-ad0a-4748-a449-5e69ef7ce651", 00:10:11.523 "is_configured": true, 00:10:11.523 "data_offset": 2048, 00:10:11.523 "data_size": 63488 00:10:11.523 }, 00:10:11.523 { 00:10:11.523 "name": "BaseBdev4", 00:10:11.523 "uuid": "e228684b-2b37-4d49-937e-8ed2338ee9e5", 00:10:11.523 "is_configured": true, 00:10:11.523 "data_offset": 2048, 00:10:11.523 "data_size": 63488 00:10:11.523 } 00:10:11.523 ] 00:10:11.523 }' 00:10:11.523 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.523 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.783 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.783 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.783 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.783 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.783 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.783 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.783 
21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.783 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.783 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.783 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.783 [2024-11-26 21:17:29.889436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.783 21:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.783 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.783 "name": "Existed_Raid", 00:10:11.783 "aliases": [ 00:10:11.783 "120461e1-a7e8-4b3b-ad0b-80d98ad2d7fe" 00:10:11.783 ], 00:10:11.783 "product_name": "Raid Volume", 00:10:11.783 "block_size": 512, 00:10:11.783 "num_blocks": 253952, 00:10:11.783 "uuid": "120461e1-a7e8-4b3b-ad0b-80d98ad2d7fe", 00:10:11.783 "assigned_rate_limits": { 00:10:11.783 "rw_ios_per_sec": 0, 00:10:11.783 "rw_mbytes_per_sec": 0, 00:10:11.783 "r_mbytes_per_sec": 0, 00:10:11.783 "w_mbytes_per_sec": 0 00:10:11.783 }, 00:10:11.783 "claimed": false, 00:10:11.783 "zoned": false, 00:10:11.783 "supported_io_types": { 00:10:11.783 "read": true, 00:10:11.783 "write": true, 00:10:11.783 "unmap": true, 00:10:11.783 "flush": true, 00:10:11.783 "reset": true, 00:10:11.783 "nvme_admin": false, 00:10:11.783 "nvme_io": false, 00:10:11.783 "nvme_io_md": false, 00:10:11.783 "write_zeroes": true, 00:10:11.783 "zcopy": false, 00:10:11.783 "get_zone_info": false, 00:10:11.783 "zone_management": false, 00:10:11.783 "zone_append": false, 00:10:11.783 "compare": false, 00:10:11.783 "compare_and_write": false, 00:10:11.783 "abort": false, 00:10:11.783 "seek_hole": false, 00:10:11.783 "seek_data": false, 00:10:11.784 "copy": false, 00:10:11.784 
"nvme_iov_md": false 00:10:11.784 }, 00:10:11.784 "memory_domains": [ 00:10:11.784 { 00:10:11.784 "dma_device_id": "system", 00:10:11.784 "dma_device_type": 1 00:10:11.784 }, 00:10:11.784 { 00:10:11.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.784 "dma_device_type": 2 00:10:11.784 }, 00:10:11.784 { 00:10:11.784 "dma_device_id": "system", 00:10:11.784 "dma_device_type": 1 00:10:11.784 }, 00:10:11.784 { 00:10:11.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.784 "dma_device_type": 2 00:10:11.784 }, 00:10:11.784 { 00:10:11.784 "dma_device_id": "system", 00:10:11.784 "dma_device_type": 1 00:10:11.784 }, 00:10:11.784 { 00:10:11.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.784 "dma_device_type": 2 00:10:11.784 }, 00:10:11.784 { 00:10:11.784 "dma_device_id": "system", 00:10:11.784 "dma_device_type": 1 00:10:11.784 }, 00:10:11.784 { 00:10:11.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.784 "dma_device_type": 2 00:10:11.784 } 00:10:11.784 ], 00:10:11.784 "driver_specific": { 00:10:11.784 "raid": { 00:10:11.784 "uuid": "120461e1-a7e8-4b3b-ad0b-80d98ad2d7fe", 00:10:11.784 "strip_size_kb": 64, 00:10:11.784 "state": "online", 00:10:11.784 "raid_level": "raid0", 00:10:11.784 "superblock": true, 00:10:11.784 "num_base_bdevs": 4, 00:10:11.784 "num_base_bdevs_discovered": 4, 00:10:11.784 "num_base_bdevs_operational": 4, 00:10:11.784 "base_bdevs_list": [ 00:10:11.784 { 00:10:11.784 "name": "BaseBdev1", 00:10:11.784 "uuid": "7e5b3612-f730-43e3-a145-c3847856ff4d", 00:10:11.784 "is_configured": true, 00:10:11.784 "data_offset": 2048, 00:10:11.784 "data_size": 63488 00:10:11.784 }, 00:10:11.784 { 00:10:11.784 "name": "BaseBdev2", 00:10:11.784 "uuid": "076a63b6-dcd6-4cdc-a9ae-110f0d228ed3", 00:10:11.784 "is_configured": true, 00:10:11.784 "data_offset": 2048, 00:10:11.784 "data_size": 63488 00:10:11.784 }, 00:10:11.784 { 00:10:11.784 "name": "BaseBdev3", 00:10:11.784 "uuid": "1df0645a-ad0a-4748-a449-5e69ef7ce651", 00:10:11.784 "is_configured": true, 
00:10:11.784 "data_offset": 2048, 00:10:11.784 "data_size": 63488 00:10:11.784 }, 00:10:11.784 { 00:10:11.784 "name": "BaseBdev4", 00:10:11.784 "uuid": "e228684b-2b37-4d49-937e-8ed2338ee9e5", 00:10:11.784 "is_configured": true, 00:10:11.784 "data_offset": 2048, 00:10:11.784 "data_size": 63488 00:10:11.784 } 00:10:11.784 ] 00:10:11.784 } 00:10:11.784 } 00:10:11.784 }' 00:10:11.784 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.047 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:12.047 BaseBdev2 00:10:12.047 BaseBdev3 00:10:12.047 BaseBdev4' 00:10:12.047 21:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.047 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.047 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.047 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:12.047 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.047 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.047 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.047 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.047 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.047 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.047 21:17:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.047 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:12.047 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.048 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.048 [2024-11-26 21:17:30.200613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:12.318 [2024-11-26 21:17:30.200698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.318 [2024-11-26 21:17:30.200754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.318 "name": "Existed_Raid", 00:10:12.318 "uuid": "120461e1-a7e8-4b3b-ad0b-80d98ad2d7fe", 00:10:12.318 "strip_size_kb": 64, 00:10:12.318 "state": "offline", 00:10:12.318 "raid_level": "raid0", 00:10:12.318 "superblock": true, 00:10:12.318 "num_base_bdevs": 4, 00:10:12.318 "num_base_bdevs_discovered": 3, 00:10:12.318 "num_base_bdevs_operational": 3, 00:10:12.318 "base_bdevs_list": [ 00:10:12.318 { 00:10:12.318 "name": null, 00:10:12.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.318 "is_configured": false, 00:10:12.318 "data_offset": 0, 00:10:12.318 "data_size": 63488 00:10:12.318 }, 00:10:12.318 { 00:10:12.318 "name": "BaseBdev2", 00:10:12.318 "uuid": "076a63b6-dcd6-4cdc-a9ae-110f0d228ed3", 00:10:12.318 "is_configured": true, 00:10:12.318 "data_offset": 2048, 00:10:12.318 "data_size": 63488 00:10:12.318 }, 00:10:12.318 { 00:10:12.318 "name": "BaseBdev3", 00:10:12.318 "uuid": "1df0645a-ad0a-4748-a449-5e69ef7ce651", 00:10:12.318 "is_configured": true, 00:10:12.318 "data_offset": 2048, 00:10:12.318 "data_size": 63488 00:10:12.318 }, 00:10:12.318 { 00:10:12.318 "name": "BaseBdev4", 00:10:12.318 "uuid": "e228684b-2b37-4d49-937e-8ed2338ee9e5", 00:10:12.318 "is_configured": true, 00:10:12.318 "data_offset": 2048, 00:10:12.318 "data_size": 63488 00:10:12.318 } 00:10:12.318 ] 00:10:12.318 }' 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.318 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.918 
21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.918 [2024-11-26 21:17:30.812526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.918 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.919 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.919 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.919 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:12.919 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.919 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.919 21:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:12.919 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.919 21:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.919 [2024-11-26 21:17:30.965494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.919 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.919 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.919 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.919 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.919 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.919 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.919 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:13.179 21:17:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.179 [2024-11-26 21:17:31.118792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:13.179 [2024-11-26 21:17:31.118847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.179 BaseBdev2 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.179 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.179 [ 00:10:13.179 { 00:10:13.179 "name": "BaseBdev2", 00:10:13.179 "aliases": [ 00:10:13.440 
"dd5ff201-c735-4a54-a23f-520be17e1cfc" 00:10:13.440 ], 00:10:13.440 "product_name": "Malloc disk", 00:10:13.440 "block_size": 512, 00:10:13.440 "num_blocks": 65536, 00:10:13.440 "uuid": "dd5ff201-c735-4a54-a23f-520be17e1cfc", 00:10:13.440 "assigned_rate_limits": { 00:10:13.440 "rw_ios_per_sec": 0, 00:10:13.440 "rw_mbytes_per_sec": 0, 00:10:13.440 "r_mbytes_per_sec": 0, 00:10:13.440 "w_mbytes_per_sec": 0 00:10:13.440 }, 00:10:13.440 "claimed": false, 00:10:13.440 "zoned": false, 00:10:13.440 "supported_io_types": { 00:10:13.440 "read": true, 00:10:13.440 "write": true, 00:10:13.440 "unmap": true, 00:10:13.440 "flush": true, 00:10:13.440 "reset": true, 00:10:13.440 "nvme_admin": false, 00:10:13.440 "nvme_io": false, 00:10:13.440 "nvme_io_md": false, 00:10:13.440 "write_zeroes": true, 00:10:13.440 "zcopy": true, 00:10:13.440 "get_zone_info": false, 00:10:13.440 "zone_management": false, 00:10:13.440 "zone_append": false, 00:10:13.440 "compare": false, 00:10:13.440 "compare_and_write": false, 00:10:13.440 "abort": true, 00:10:13.440 "seek_hole": false, 00:10:13.440 "seek_data": false, 00:10:13.440 "copy": true, 00:10:13.440 "nvme_iov_md": false 00:10:13.440 }, 00:10:13.440 "memory_domains": [ 00:10:13.440 { 00:10:13.440 "dma_device_id": "system", 00:10:13.440 "dma_device_type": 1 00:10:13.440 }, 00:10:13.440 { 00:10:13.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.440 "dma_device_type": 2 00:10:13.440 } 00:10:13.440 ], 00:10:13.440 "driver_specific": {} 00:10:13.440 } 00:10:13.440 ] 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:13.440 21:17:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.440 BaseBdev3 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.440 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.440 [ 00:10:13.440 { 
00:10:13.440 "name": "BaseBdev3", 00:10:13.440 "aliases": [ 00:10:13.440 "dd3db460-9327-42bc-a4fa-7cf0b96e78f9" 00:10:13.440 ], 00:10:13.440 "product_name": "Malloc disk", 00:10:13.440 "block_size": 512, 00:10:13.440 "num_blocks": 65536, 00:10:13.440 "uuid": "dd3db460-9327-42bc-a4fa-7cf0b96e78f9", 00:10:13.440 "assigned_rate_limits": { 00:10:13.440 "rw_ios_per_sec": 0, 00:10:13.440 "rw_mbytes_per_sec": 0, 00:10:13.440 "r_mbytes_per_sec": 0, 00:10:13.440 "w_mbytes_per_sec": 0 00:10:13.440 }, 00:10:13.440 "claimed": false, 00:10:13.440 "zoned": false, 00:10:13.440 "supported_io_types": { 00:10:13.440 "read": true, 00:10:13.440 "write": true, 00:10:13.440 "unmap": true, 00:10:13.440 "flush": true, 00:10:13.440 "reset": true, 00:10:13.440 "nvme_admin": false, 00:10:13.440 "nvme_io": false, 00:10:13.440 "nvme_io_md": false, 00:10:13.440 "write_zeroes": true, 00:10:13.440 "zcopy": true, 00:10:13.440 "get_zone_info": false, 00:10:13.440 "zone_management": false, 00:10:13.440 "zone_append": false, 00:10:13.440 "compare": false, 00:10:13.440 "compare_and_write": false, 00:10:13.440 "abort": true, 00:10:13.440 "seek_hole": false, 00:10:13.440 "seek_data": false, 00:10:13.440 "copy": true, 00:10:13.440 "nvme_iov_md": false 00:10:13.440 }, 00:10:13.440 "memory_domains": [ 00:10:13.440 { 00:10:13.440 "dma_device_id": "system", 00:10:13.440 "dma_device_type": 1 00:10:13.440 }, 00:10:13.440 { 00:10:13.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.440 "dma_device_type": 2 00:10:13.440 } 00:10:13.440 ], 00:10:13.440 "driver_specific": {} 00:10:13.440 } 00:10:13.440 ] 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.441 BaseBdev4 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:13.441 [ 00:10:13.441 { 00:10:13.441 "name": "BaseBdev4", 00:10:13.441 "aliases": [ 00:10:13.441 "444e2038-fd4b-4833-99ae-1ebbc6301088" 00:10:13.441 ], 00:10:13.441 "product_name": "Malloc disk", 00:10:13.441 "block_size": 512, 00:10:13.441 "num_blocks": 65536, 00:10:13.441 "uuid": "444e2038-fd4b-4833-99ae-1ebbc6301088", 00:10:13.441 "assigned_rate_limits": { 00:10:13.441 "rw_ios_per_sec": 0, 00:10:13.441 "rw_mbytes_per_sec": 0, 00:10:13.441 "r_mbytes_per_sec": 0, 00:10:13.441 "w_mbytes_per_sec": 0 00:10:13.441 }, 00:10:13.441 "claimed": false, 00:10:13.441 "zoned": false, 00:10:13.441 "supported_io_types": { 00:10:13.441 "read": true, 00:10:13.441 "write": true, 00:10:13.441 "unmap": true, 00:10:13.441 "flush": true, 00:10:13.441 "reset": true, 00:10:13.441 "nvme_admin": false, 00:10:13.441 "nvme_io": false, 00:10:13.441 "nvme_io_md": false, 00:10:13.441 "write_zeroes": true, 00:10:13.441 "zcopy": true, 00:10:13.441 "get_zone_info": false, 00:10:13.441 "zone_management": false, 00:10:13.441 "zone_append": false, 00:10:13.441 "compare": false, 00:10:13.441 "compare_and_write": false, 00:10:13.441 "abort": true, 00:10:13.441 "seek_hole": false, 00:10:13.441 "seek_data": false, 00:10:13.441 "copy": true, 00:10:13.441 "nvme_iov_md": false 00:10:13.441 }, 00:10:13.441 "memory_domains": [ 00:10:13.441 { 00:10:13.441 "dma_device_id": "system", 00:10:13.441 "dma_device_type": 1 00:10:13.441 }, 00:10:13.441 { 00:10:13.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.441 "dma_device_type": 2 00:10:13.441 } 00:10:13.441 ], 00:10:13.441 "driver_specific": {} 00:10:13.441 } 00:10:13.441 ] 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:13.441 21:17:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.441 [2024-11-26 21:17:31.510330] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.441 [2024-11-26 21:17:31.510478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.441 [2024-11-26 21:17:31.510520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.441 [2024-11-26 21:17:31.512337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.441 [2024-11-26 21:17:31.512434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.441 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.441 "name": "Existed_Raid", 00:10:13.441 "uuid": "89278ef7-555c-44f1-915a-6599348bdc09", 00:10:13.441 "strip_size_kb": 64, 00:10:13.441 "state": "configuring", 00:10:13.441 "raid_level": "raid0", 00:10:13.441 "superblock": true, 00:10:13.441 "num_base_bdevs": 4, 00:10:13.441 "num_base_bdevs_discovered": 3, 00:10:13.441 "num_base_bdevs_operational": 4, 00:10:13.441 "base_bdevs_list": [ 00:10:13.441 { 00:10:13.441 "name": "BaseBdev1", 00:10:13.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.441 "is_configured": false, 00:10:13.441 "data_offset": 0, 00:10:13.441 "data_size": 0 00:10:13.441 }, 00:10:13.441 { 00:10:13.441 "name": "BaseBdev2", 00:10:13.441 "uuid": "dd5ff201-c735-4a54-a23f-520be17e1cfc", 00:10:13.441 "is_configured": true, 00:10:13.441 "data_offset": 2048, 00:10:13.441 "data_size": 63488 
00:10:13.441 }, 00:10:13.441 { 00:10:13.441 "name": "BaseBdev3", 00:10:13.441 "uuid": "dd3db460-9327-42bc-a4fa-7cf0b96e78f9", 00:10:13.441 "is_configured": true, 00:10:13.441 "data_offset": 2048, 00:10:13.441 "data_size": 63488 00:10:13.441 }, 00:10:13.442 { 00:10:13.442 "name": "BaseBdev4", 00:10:13.442 "uuid": "444e2038-fd4b-4833-99ae-1ebbc6301088", 00:10:13.442 "is_configured": true, 00:10:13.442 "data_offset": 2048, 00:10:13.442 "data_size": 63488 00:10:13.442 } 00:10:13.442 ] 00:10:13.442 }' 00:10:13.442 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.442 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.011 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.012 [2024-11-26 21:17:31.941599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.012 "name": "Existed_Raid", 00:10:14.012 "uuid": "89278ef7-555c-44f1-915a-6599348bdc09", 00:10:14.012 "strip_size_kb": 64, 00:10:14.012 "state": "configuring", 00:10:14.012 "raid_level": "raid0", 00:10:14.012 "superblock": true, 00:10:14.012 "num_base_bdevs": 4, 00:10:14.012 "num_base_bdevs_discovered": 2, 00:10:14.012 "num_base_bdevs_operational": 4, 00:10:14.012 "base_bdevs_list": [ 00:10:14.012 { 00:10:14.012 "name": "BaseBdev1", 00:10:14.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.012 "is_configured": false, 00:10:14.012 "data_offset": 0, 00:10:14.012 "data_size": 0 00:10:14.012 }, 00:10:14.012 { 00:10:14.012 "name": null, 00:10:14.012 "uuid": "dd5ff201-c735-4a54-a23f-520be17e1cfc", 00:10:14.012 "is_configured": false, 00:10:14.012 "data_offset": 0, 00:10:14.012 "data_size": 63488 
00:10:14.012 }, 00:10:14.012 { 00:10:14.012 "name": "BaseBdev3", 00:10:14.012 "uuid": "dd3db460-9327-42bc-a4fa-7cf0b96e78f9", 00:10:14.012 "is_configured": true, 00:10:14.012 "data_offset": 2048, 00:10:14.012 "data_size": 63488 00:10:14.012 }, 00:10:14.012 { 00:10:14.012 "name": "BaseBdev4", 00:10:14.012 "uuid": "444e2038-fd4b-4833-99ae-1ebbc6301088", 00:10:14.012 "is_configured": true, 00:10:14.012 "data_offset": 2048, 00:10:14.012 "data_size": 63488 00:10:14.012 } 00:10:14.012 ] 00:10:14.012 }' 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.012 21:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.271 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.271 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:14.272 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.272 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.531 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.531 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:14.531 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.532 [2024-11-26 21:17:32.504939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.532 BaseBdev1 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.532 [ 00:10:14.532 { 00:10:14.532 "name": "BaseBdev1", 00:10:14.532 "aliases": [ 00:10:14.532 "5120709a-0109-45de-8f56-5643fefd7b81" 00:10:14.532 ], 00:10:14.532 "product_name": "Malloc disk", 00:10:14.532 "block_size": 512, 00:10:14.532 "num_blocks": 65536, 00:10:14.532 "uuid": "5120709a-0109-45de-8f56-5643fefd7b81", 00:10:14.532 "assigned_rate_limits": { 00:10:14.532 "rw_ios_per_sec": 0, 00:10:14.532 "rw_mbytes_per_sec": 0, 
00:10:14.532 "r_mbytes_per_sec": 0, 00:10:14.532 "w_mbytes_per_sec": 0 00:10:14.532 }, 00:10:14.532 "claimed": true, 00:10:14.532 "claim_type": "exclusive_write", 00:10:14.532 "zoned": false, 00:10:14.532 "supported_io_types": { 00:10:14.532 "read": true, 00:10:14.532 "write": true, 00:10:14.532 "unmap": true, 00:10:14.532 "flush": true, 00:10:14.532 "reset": true, 00:10:14.532 "nvme_admin": false, 00:10:14.532 "nvme_io": false, 00:10:14.532 "nvme_io_md": false, 00:10:14.532 "write_zeroes": true, 00:10:14.532 "zcopy": true, 00:10:14.532 "get_zone_info": false, 00:10:14.532 "zone_management": false, 00:10:14.532 "zone_append": false, 00:10:14.532 "compare": false, 00:10:14.532 "compare_and_write": false, 00:10:14.532 "abort": true, 00:10:14.532 "seek_hole": false, 00:10:14.532 "seek_data": false, 00:10:14.532 "copy": true, 00:10:14.532 "nvme_iov_md": false 00:10:14.532 }, 00:10:14.532 "memory_domains": [ 00:10:14.532 { 00:10:14.532 "dma_device_id": "system", 00:10:14.532 "dma_device_type": 1 00:10:14.532 }, 00:10:14.532 { 00:10:14.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.532 "dma_device_type": 2 00:10:14.532 } 00:10:14.532 ], 00:10:14.532 "driver_specific": {} 00:10:14.532 } 00:10:14.532 ] 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.532 21:17:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.532 "name": "Existed_Raid", 00:10:14.532 "uuid": "89278ef7-555c-44f1-915a-6599348bdc09", 00:10:14.532 "strip_size_kb": 64, 00:10:14.532 "state": "configuring", 00:10:14.532 "raid_level": "raid0", 00:10:14.532 "superblock": true, 00:10:14.532 "num_base_bdevs": 4, 00:10:14.532 "num_base_bdevs_discovered": 3, 00:10:14.532 "num_base_bdevs_operational": 4, 00:10:14.532 "base_bdevs_list": [ 00:10:14.532 { 00:10:14.532 "name": "BaseBdev1", 00:10:14.532 "uuid": "5120709a-0109-45de-8f56-5643fefd7b81", 00:10:14.532 "is_configured": true, 00:10:14.532 "data_offset": 2048, 00:10:14.532 "data_size": 63488 00:10:14.532 }, 00:10:14.532 { 
00:10:14.532 "name": null, 00:10:14.532 "uuid": "dd5ff201-c735-4a54-a23f-520be17e1cfc", 00:10:14.532 "is_configured": false, 00:10:14.532 "data_offset": 0, 00:10:14.532 "data_size": 63488 00:10:14.532 }, 00:10:14.532 { 00:10:14.532 "name": "BaseBdev3", 00:10:14.532 "uuid": "dd3db460-9327-42bc-a4fa-7cf0b96e78f9", 00:10:14.532 "is_configured": true, 00:10:14.532 "data_offset": 2048, 00:10:14.532 "data_size": 63488 00:10:14.532 }, 00:10:14.532 { 00:10:14.532 "name": "BaseBdev4", 00:10:14.532 "uuid": "444e2038-fd4b-4833-99ae-1ebbc6301088", 00:10:14.532 "is_configured": true, 00:10:14.532 "data_offset": 2048, 00:10:14.532 "data_size": 63488 00:10:14.532 } 00:10:14.532 ] 00:10:14.532 }' 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.532 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.102 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.103 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.103 21:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:15.103 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.103 21:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.103 [2024-11-26 21:17:33.032109] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.103 21:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.103 "name": "Existed_Raid", 00:10:15.103 "uuid": "89278ef7-555c-44f1-915a-6599348bdc09", 00:10:15.103 "strip_size_kb": 64, 00:10:15.103 "state": "configuring", 00:10:15.103 "raid_level": "raid0", 00:10:15.103 "superblock": true, 00:10:15.103 "num_base_bdevs": 4, 00:10:15.103 "num_base_bdevs_discovered": 2, 00:10:15.103 "num_base_bdevs_operational": 4, 00:10:15.103 "base_bdevs_list": [ 00:10:15.103 { 00:10:15.103 "name": "BaseBdev1", 00:10:15.103 "uuid": "5120709a-0109-45de-8f56-5643fefd7b81", 00:10:15.103 "is_configured": true, 00:10:15.103 "data_offset": 2048, 00:10:15.103 "data_size": 63488 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "name": null, 00:10:15.103 "uuid": "dd5ff201-c735-4a54-a23f-520be17e1cfc", 00:10:15.103 "is_configured": false, 00:10:15.103 "data_offset": 0, 00:10:15.103 "data_size": 63488 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "name": null, 00:10:15.103 "uuid": "dd3db460-9327-42bc-a4fa-7cf0b96e78f9", 00:10:15.103 "is_configured": false, 00:10:15.103 "data_offset": 0, 00:10:15.103 "data_size": 63488 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "name": "BaseBdev4", 00:10:15.103 "uuid": "444e2038-fd4b-4833-99ae-1ebbc6301088", 00:10:15.103 "is_configured": true, 00:10:15.103 "data_offset": 2048, 00:10:15.103 "data_size": 63488 00:10:15.103 } 00:10:15.103 ] 00:10:15.103 }' 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.103 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.363 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.363 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.363 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.363 21:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:15.363 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.363 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:15.363 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:15.363 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.363 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.364 [2024-11-26 21:17:33.503301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.364 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.624 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.624 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.624 "name": "Existed_Raid", 00:10:15.624 "uuid": "89278ef7-555c-44f1-915a-6599348bdc09", 00:10:15.624 "strip_size_kb": 64, 00:10:15.624 "state": "configuring", 00:10:15.624 "raid_level": "raid0", 00:10:15.624 "superblock": true, 00:10:15.624 "num_base_bdevs": 4, 00:10:15.624 "num_base_bdevs_discovered": 3, 00:10:15.624 "num_base_bdevs_operational": 4, 00:10:15.624 "base_bdevs_list": [ 00:10:15.624 { 00:10:15.624 "name": "BaseBdev1", 00:10:15.624 "uuid": "5120709a-0109-45de-8f56-5643fefd7b81", 00:10:15.624 "is_configured": true, 00:10:15.624 "data_offset": 2048, 00:10:15.624 "data_size": 63488 00:10:15.624 }, 00:10:15.624 { 00:10:15.624 "name": null, 00:10:15.624 "uuid": "dd5ff201-c735-4a54-a23f-520be17e1cfc", 00:10:15.624 "is_configured": false, 00:10:15.624 "data_offset": 0, 00:10:15.624 "data_size": 63488 00:10:15.624 }, 00:10:15.624 { 00:10:15.624 "name": "BaseBdev3", 00:10:15.624 "uuid": "dd3db460-9327-42bc-a4fa-7cf0b96e78f9", 00:10:15.624 "is_configured": true, 00:10:15.624 "data_offset": 2048, 00:10:15.624 "data_size": 63488 00:10:15.624 }, 00:10:15.624 { 00:10:15.624 "name": "BaseBdev4", 00:10:15.624 "uuid": 
"444e2038-fd4b-4833-99ae-1ebbc6301088", 00:10:15.624 "is_configured": true, 00:10:15.624 "data_offset": 2048, 00:10:15.624 "data_size": 63488 00:10:15.624 } 00:10:15.624 ] 00:10:15.624 }' 00:10:15.624 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.624 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.884 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.884 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:15.884 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.884 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.884 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.884 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:15.884 21:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.884 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.884 21:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.884 [2024-11-26 21:17:33.978536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.144 "name": "Existed_Raid", 00:10:16.144 "uuid": "89278ef7-555c-44f1-915a-6599348bdc09", 00:10:16.144 "strip_size_kb": 64, 00:10:16.144 "state": "configuring", 00:10:16.144 "raid_level": "raid0", 00:10:16.144 "superblock": true, 00:10:16.144 "num_base_bdevs": 4, 00:10:16.144 "num_base_bdevs_discovered": 2, 00:10:16.144 "num_base_bdevs_operational": 4, 00:10:16.144 "base_bdevs_list": [ 00:10:16.144 { 00:10:16.144 "name": null, 00:10:16.144 
"uuid": "5120709a-0109-45de-8f56-5643fefd7b81", 00:10:16.144 "is_configured": false, 00:10:16.144 "data_offset": 0, 00:10:16.144 "data_size": 63488 00:10:16.144 }, 00:10:16.144 { 00:10:16.144 "name": null, 00:10:16.144 "uuid": "dd5ff201-c735-4a54-a23f-520be17e1cfc", 00:10:16.144 "is_configured": false, 00:10:16.144 "data_offset": 0, 00:10:16.144 "data_size": 63488 00:10:16.144 }, 00:10:16.144 { 00:10:16.144 "name": "BaseBdev3", 00:10:16.144 "uuid": "dd3db460-9327-42bc-a4fa-7cf0b96e78f9", 00:10:16.144 "is_configured": true, 00:10:16.144 "data_offset": 2048, 00:10:16.144 "data_size": 63488 00:10:16.144 }, 00:10:16.144 { 00:10:16.144 "name": "BaseBdev4", 00:10:16.144 "uuid": "444e2038-fd4b-4833-99ae-1ebbc6301088", 00:10:16.144 "is_configured": true, 00:10:16.144 "data_offset": 2048, 00:10:16.144 "data_size": 63488 00:10:16.144 } 00:10:16.144 ] 00:10:16.144 }' 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.144 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.405 [2024-11-26 21:17:34.524331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.405 21:17:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.405 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.665 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.665 "name": "Existed_Raid", 00:10:16.665 "uuid": "89278ef7-555c-44f1-915a-6599348bdc09", 00:10:16.665 "strip_size_kb": 64, 00:10:16.665 "state": "configuring", 00:10:16.665 "raid_level": "raid0", 00:10:16.665 "superblock": true, 00:10:16.665 "num_base_bdevs": 4, 00:10:16.665 "num_base_bdevs_discovered": 3, 00:10:16.665 "num_base_bdevs_operational": 4, 00:10:16.665 "base_bdevs_list": [ 00:10:16.665 { 00:10:16.665 "name": null, 00:10:16.665 "uuid": "5120709a-0109-45de-8f56-5643fefd7b81", 00:10:16.665 "is_configured": false, 00:10:16.665 "data_offset": 0, 00:10:16.665 "data_size": 63488 00:10:16.665 }, 00:10:16.665 { 00:10:16.665 "name": "BaseBdev2", 00:10:16.665 "uuid": "dd5ff201-c735-4a54-a23f-520be17e1cfc", 00:10:16.665 "is_configured": true, 00:10:16.665 "data_offset": 2048, 00:10:16.665 "data_size": 63488 00:10:16.665 }, 00:10:16.665 { 00:10:16.665 "name": "BaseBdev3", 00:10:16.665 "uuid": "dd3db460-9327-42bc-a4fa-7cf0b96e78f9", 00:10:16.665 "is_configured": true, 00:10:16.665 "data_offset": 2048, 00:10:16.665 "data_size": 63488 00:10:16.665 }, 00:10:16.665 { 00:10:16.665 "name": "BaseBdev4", 00:10:16.665 "uuid": "444e2038-fd4b-4833-99ae-1ebbc6301088", 00:10:16.665 "is_configured": true, 00:10:16.665 "data_offset": 2048, 00:10:16.665 "data_size": 63488 00:10:16.665 } 00:10:16.665 ] 00:10:16.665 }' 00:10:16.665 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.665 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.926 21:17:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5120709a-0109-45de-8f56-5643fefd7b81 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.926 21:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.926 NewBaseBdev 00:10:16.926 [2024-11-26 21:17:35.032326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:16.926 [2024-11-26 21:17:35.032580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:16.926 [2024-11-26 21:17:35.032594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:16.926 [2024-11-26 21:17:35.032854] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:16.926 [2024-11-26 21:17:35.033017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:16.926 [2024-11-26 21:17:35.033048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:16.926 [2024-11-26 21:17:35.033201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.926 
21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.926 [ 00:10:16.926 { 00:10:16.926 "name": "NewBaseBdev", 00:10:16.926 "aliases": [ 00:10:16.926 "5120709a-0109-45de-8f56-5643fefd7b81" 00:10:16.926 ], 00:10:16.926 "product_name": "Malloc disk", 00:10:16.926 "block_size": 512, 00:10:16.926 "num_blocks": 65536, 00:10:16.926 "uuid": "5120709a-0109-45de-8f56-5643fefd7b81", 00:10:16.926 "assigned_rate_limits": { 00:10:16.926 "rw_ios_per_sec": 0, 00:10:16.926 "rw_mbytes_per_sec": 0, 00:10:16.926 "r_mbytes_per_sec": 0, 00:10:16.926 "w_mbytes_per_sec": 0 00:10:16.926 }, 00:10:16.926 "claimed": true, 00:10:16.926 "claim_type": "exclusive_write", 00:10:16.926 "zoned": false, 00:10:16.926 "supported_io_types": { 00:10:16.926 "read": true, 00:10:16.926 "write": true, 00:10:16.926 "unmap": true, 00:10:16.926 "flush": true, 00:10:16.926 "reset": true, 00:10:16.926 "nvme_admin": false, 00:10:16.926 "nvme_io": false, 00:10:16.926 "nvme_io_md": false, 00:10:16.926 "write_zeroes": true, 00:10:16.926 "zcopy": true, 00:10:16.926 "get_zone_info": false, 00:10:16.926 "zone_management": false, 00:10:16.926 "zone_append": false, 00:10:16.926 "compare": false, 00:10:16.926 "compare_and_write": false, 00:10:16.926 "abort": true, 00:10:16.926 "seek_hole": false, 00:10:16.926 "seek_data": false, 00:10:16.926 "copy": true, 00:10:16.926 "nvme_iov_md": false 00:10:16.926 }, 00:10:16.926 "memory_domains": [ 00:10:16.926 { 00:10:16.926 "dma_device_id": "system", 00:10:16.926 "dma_device_type": 1 00:10:16.926 }, 00:10:16.926 { 00:10:16.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.926 "dma_device_type": 2 00:10:16.926 } 00:10:16.926 ], 00:10:16.926 "driver_specific": {} 00:10:16.926 } 00:10:16.926 ] 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.926 21:17:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.926 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.927 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.927 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.927 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.927 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.927 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.927 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.927 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.927 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.927 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.927 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.187 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.187 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.187 "name": "Existed_Raid", 00:10:17.187 "uuid": "89278ef7-555c-44f1-915a-6599348bdc09", 00:10:17.187 "strip_size_kb": 64, 00:10:17.187 
"state": "online", 00:10:17.187 "raid_level": "raid0", 00:10:17.187 "superblock": true, 00:10:17.187 "num_base_bdevs": 4, 00:10:17.187 "num_base_bdevs_discovered": 4, 00:10:17.187 "num_base_bdevs_operational": 4, 00:10:17.187 "base_bdevs_list": [ 00:10:17.187 { 00:10:17.187 "name": "NewBaseBdev", 00:10:17.187 "uuid": "5120709a-0109-45de-8f56-5643fefd7b81", 00:10:17.187 "is_configured": true, 00:10:17.187 "data_offset": 2048, 00:10:17.187 "data_size": 63488 00:10:17.187 }, 00:10:17.187 { 00:10:17.187 "name": "BaseBdev2", 00:10:17.187 "uuid": "dd5ff201-c735-4a54-a23f-520be17e1cfc", 00:10:17.187 "is_configured": true, 00:10:17.187 "data_offset": 2048, 00:10:17.187 "data_size": 63488 00:10:17.187 }, 00:10:17.187 { 00:10:17.187 "name": "BaseBdev3", 00:10:17.187 "uuid": "dd3db460-9327-42bc-a4fa-7cf0b96e78f9", 00:10:17.187 "is_configured": true, 00:10:17.187 "data_offset": 2048, 00:10:17.187 "data_size": 63488 00:10:17.187 }, 00:10:17.187 { 00:10:17.187 "name": "BaseBdev4", 00:10:17.187 "uuid": "444e2038-fd4b-4833-99ae-1ebbc6301088", 00:10:17.187 "is_configured": true, 00:10:17.187 "data_offset": 2048, 00:10:17.187 "data_size": 63488 00:10:17.187 } 00:10:17.187 ] 00:10:17.187 }' 00:10:17.187 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.187 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.447 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.448 
21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.448 [2024-11-26 21:17:35.512046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.448 "name": "Existed_Raid", 00:10:17.448 "aliases": [ 00:10:17.448 "89278ef7-555c-44f1-915a-6599348bdc09" 00:10:17.448 ], 00:10:17.448 "product_name": "Raid Volume", 00:10:17.448 "block_size": 512, 00:10:17.448 "num_blocks": 253952, 00:10:17.448 "uuid": "89278ef7-555c-44f1-915a-6599348bdc09", 00:10:17.448 "assigned_rate_limits": { 00:10:17.448 "rw_ios_per_sec": 0, 00:10:17.448 "rw_mbytes_per_sec": 0, 00:10:17.448 "r_mbytes_per_sec": 0, 00:10:17.448 "w_mbytes_per_sec": 0 00:10:17.448 }, 00:10:17.448 "claimed": false, 00:10:17.448 "zoned": false, 00:10:17.448 "supported_io_types": { 00:10:17.448 "read": true, 00:10:17.448 "write": true, 00:10:17.448 "unmap": true, 00:10:17.448 "flush": true, 00:10:17.448 "reset": true, 00:10:17.448 "nvme_admin": false, 00:10:17.448 "nvme_io": false, 00:10:17.448 "nvme_io_md": false, 00:10:17.448 "write_zeroes": true, 00:10:17.448 "zcopy": false, 00:10:17.448 "get_zone_info": false, 00:10:17.448 "zone_management": false, 00:10:17.448 "zone_append": false, 00:10:17.448 "compare": false, 00:10:17.448 "compare_and_write": false, 00:10:17.448 "abort": 
false, 00:10:17.448 "seek_hole": false, 00:10:17.448 "seek_data": false, 00:10:17.448 "copy": false, 00:10:17.448 "nvme_iov_md": false 00:10:17.448 }, 00:10:17.448 "memory_domains": [ 00:10:17.448 { 00:10:17.448 "dma_device_id": "system", 00:10:17.448 "dma_device_type": 1 00:10:17.448 }, 00:10:17.448 { 00:10:17.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.448 "dma_device_type": 2 00:10:17.448 }, 00:10:17.448 { 00:10:17.448 "dma_device_id": "system", 00:10:17.448 "dma_device_type": 1 00:10:17.448 }, 00:10:17.448 { 00:10:17.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.448 "dma_device_type": 2 00:10:17.448 }, 00:10:17.448 { 00:10:17.448 "dma_device_id": "system", 00:10:17.448 "dma_device_type": 1 00:10:17.448 }, 00:10:17.448 { 00:10:17.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.448 "dma_device_type": 2 00:10:17.448 }, 00:10:17.448 { 00:10:17.448 "dma_device_id": "system", 00:10:17.448 "dma_device_type": 1 00:10:17.448 }, 00:10:17.448 { 00:10:17.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.448 "dma_device_type": 2 00:10:17.448 } 00:10:17.448 ], 00:10:17.448 "driver_specific": { 00:10:17.448 "raid": { 00:10:17.448 "uuid": "89278ef7-555c-44f1-915a-6599348bdc09", 00:10:17.448 "strip_size_kb": 64, 00:10:17.448 "state": "online", 00:10:17.448 "raid_level": "raid0", 00:10:17.448 "superblock": true, 00:10:17.448 "num_base_bdevs": 4, 00:10:17.448 "num_base_bdevs_discovered": 4, 00:10:17.448 "num_base_bdevs_operational": 4, 00:10:17.448 "base_bdevs_list": [ 00:10:17.448 { 00:10:17.448 "name": "NewBaseBdev", 00:10:17.448 "uuid": "5120709a-0109-45de-8f56-5643fefd7b81", 00:10:17.448 "is_configured": true, 00:10:17.448 "data_offset": 2048, 00:10:17.448 "data_size": 63488 00:10:17.448 }, 00:10:17.448 { 00:10:17.448 "name": "BaseBdev2", 00:10:17.448 "uuid": "dd5ff201-c735-4a54-a23f-520be17e1cfc", 00:10:17.448 "is_configured": true, 00:10:17.448 "data_offset": 2048, 00:10:17.448 "data_size": 63488 00:10:17.448 }, 00:10:17.448 { 00:10:17.448 
"name": "BaseBdev3", 00:10:17.448 "uuid": "dd3db460-9327-42bc-a4fa-7cf0b96e78f9", 00:10:17.448 "is_configured": true, 00:10:17.448 "data_offset": 2048, 00:10:17.448 "data_size": 63488 00:10:17.448 }, 00:10:17.448 { 00:10:17.448 "name": "BaseBdev4", 00:10:17.448 "uuid": "444e2038-fd4b-4833-99ae-1ebbc6301088", 00:10:17.448 "is_configured": true, 00:10:17.448 "data_offset": 2048, 00:10:17.448 "data_size": 63488 00:10:17.448 } 00:10:17.448 ] 00:10:17.448 } 00:10:17.448 } 00:10:17.448 }' 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:17.448 BaseBdev2 00:10:17.448 BaseBdev3 00:10:17.448 BaseBdev4' 00:10:17.448 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.709 21:17:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.709 [2024-11-26 21:17:35.823119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.709 [2024-11-26 21:17:35.823151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.709 [2024-11-26 21:17:35.823229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.709 [2024-11-26 21:17:35.823296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.709 [2024-11-26 21:17:35.823307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69870 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69870 ']' 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69870 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.709 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69870 00:10:17.969 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.969 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.969 killing process with pid 69870 00:10:17.969 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69870' 00:10:17.969 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69870 00:10:17.969 [2024-11-26 21:17:35.866944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.969 21:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69870 00:10:18.229 [2024-11-26 21:17:36.263948] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.612 ************************************ 00:10:19.612 END TEST raid_state_function_test_sb 00:10:19.612 ************************************ 00:10:19.612 21:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:19.612 00:10:19.612 real 0m11.461s 00:10:19.612 user 0m18.196s 00:10:19.612 sys 
0m2.098s 00:10:19.612 21:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.612 21:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.612 21:17:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:19.612 21:17:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:19.612 21:17:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.612 21:17:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.612 ************************************ 00:10:19.612 START TEST raid_superblock_test 00:10:19.612 ************************************ 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70540 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70540 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70540 ']' 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.612 21:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.612 [2024-11-26 21:17:37.539415] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:19.612 [2024-11-26 21:17:37.539535] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70540 ] 00:10:19.612 [2024-11-26 21:17:37.694188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.872 [2024-11-26 21:17:37.807115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.872 [2024-11-26 21:17:38.002641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.872 [2024-11-26 21:17:38.002705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:20.442 
21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.442 malloc1 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.442 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.443 [2024-11-26 21:17:38.415560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.443 [2024-11-26 21:17:38.415728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.443 [2024-11-26 21:17:38.415768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:20.443 [2024-11-26 21:17:38.415799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.443 [2024-11-26 21:17:38.417877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.443 [2024-11-26 21:17:38.417969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.443 pt1 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.443 malloc2 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.443 [2024-11-26 21:17:38.474610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.443 [2024-11-26 21:17:38.474664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.443 [2024-11-26 21:17:38.474704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:20.443 [2024-11-26 21:17:38.474713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.443 [2024-11-26 21:17:38.476708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.443 [2024-11-26 21:17:38.476803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.443 
pt2 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.443 malloc3 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.443 [2024-11-26 21:17:38.542651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.443 [2024-11-26 21:17:38.542752] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.443 [2024-11-26 21:17:38.542790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:20.443 [2024-11-26 21:17:38.542817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.443 [2024-11-26 21:17:38.544937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.443 [2024-11-26 21:17:38.545061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.443 pt3 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.443 malloc4 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.443 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.703 [2024-11-26 21:17:38.597611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:20.703 [2024-11-26 21:17:38.597738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.703 [2024-11-26 21:17:38.597776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:20.703 [2024-11-26 21:17:38.597804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.703 [2024-11-26 21:17:38.599806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.703 [2024-11-26 21:17:38.599906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:20.703 pt4 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.703 [2024-11-26 21:17:38.609622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.703 [2024-11-26 
21:17:38.611367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.703 [2024-11-26 21:17:38.611449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.703 [2024-11-26 21:17:38.611494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:20.703 [2024-11-26 21:17:38.611656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:20.703 [2024-11-26 21:17:38.611667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.703 [2024-11-26 21:17:38.611933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:20.703 [2024-11-26 21:17:38.612133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:20.703 [2024-11-26 21:17:38.612147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:20.703 [2024-11-26 21:17:38.612280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.703 "name": "raid_bdev1", 00:10:20.703 "uuid": "4cbc6250-ec9d-408e-aa07-c53348fed334", 00:10:20.703 "strip_size_kb": 64, 00:10:20.703 "state": "online", 00:10:20.703 "raid_level": "raid0", 00:10:20.703 "superblock": true, 00:10:20.703 "num_base_bdevs": 4, 00:10:20.703 "num_base_bdevs_discovered": 4, 00:10:20.703 "num_base_bdevs_operational": 4, 00:10:20.703 "base_bdevs_list": [ 00:10:20.703 { 00:10:20.703 "name": "pt1", 00:10:20.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.703 "is_configured": true, 00:10:20.703 "data_offset": 2048, 00:10:20.703 "data_size": 63488 00:10:20.703 }, 00:10:20.703 { 00:10:20.703 "name": "pt2", 00:10:20.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.703 "is_configured": true, 00:10:20.703 "data_offset": 2048, 00:10:20.703 "data_size": 63488 00:10:20.703 }, 00:10:20.703 { 00:10:20.703 "name": "pt3", 00:10:20.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.703 "is_configured": true, 00:10:20.703 "data_offset": 2048, 00:10:20.703 
"data_size": 63488 00:10:20.703 }, 00:10:20.703 { 00:10:20.703 "name": "pt4", 00:10:20.703 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.703 "is_configured": true, 00:10:20.703 "data_offset": 2048, 00:10:20.703 "data_size": 63488 00:10:20.703 } 00:10:20.703 ] 00:10:20.703 }' 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.703 21:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.963 [2024-11-26 21:17:39.013192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.963 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.963 "name": "raid_bdev1", 00:10:20.963 "aliases": [ 00:10:20.963 "4cbc6250-ec9d-408e-aa07-c53348fed334" 
00:10:20.963 ], 00:10:20.963 "product_name": "Raid Volume", 00:10:20.963 "block_size": 512, 00:10:20.963 "num_blocks": 253952, 00:10:20.963 "uuid": "4cbc6250-ec9d-408e-aa07-c53348fed334", 00:10:20.963 "assigned_rate_limits": { 00:10:20.963 "rw_ios_per_sec": 0, 00:10:20.963 "rw_mbytes_per_sec": 0, 00:10:20.963 "r_mbytes_per_sec": 0, 00:10:20.963 "w_mbytes_per_sec": 0 00:10:20.963 }, 00:10:20.963 "claimed": false, 00:10:20.963 "zoned": false, 00:10:20.963 "supported_io_types": { 00:10:20.963 "read": true, 00:10:20.963 "write": true, 00:10:20.963 "unmap": true, 00:10:20.963 "flush": true, 00:10:20.963 "reset": true, 00:10:20.963 "nvme_admin": false, 00:10:20.963 "nvme_io": false, 00:10:20.963 "nvme_io_md": false, 00:10:20.963 "write_zeroes": true, 00:10:20.963 "zcopy": false, 00:10:20.963 "get_zone_info": false, 00:10:20.963 "zone_management": false, 00:10:20.963 "zone_append": false, 00:10:20.963 "compare": false, 00:10:20.963 "compare_and_write": false, 00:10:20.963 "abort": false, 00:10:20.963 "seek_hole": false, 00:10:20.963 "seek_data": false, 00:10:20.963 "copy": false, 00:10:20.963 "nvme_iov_md": false 00:10:20.963 }, 00:10:20.963 "memory_domains": [ 00:10:20.963 { 00:10:20.963 "dma_device_id": "system", 00:10:20.963 "dma_device_type": 1 00:10:20.963 }, 00:10:20.963 { 00:10:20.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.963 "dma_device_type": 2 00:10:20.963 }, 00:10:20.963 { 00:10:20.963 "dma_device_id": "system", 00:10:20.963 "dma_device_type": 1 00:10:20.963 }, 00:10:20.963 { 00:10:20.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.964 "dma_device_type": 2 00:10:20.964 }, 00:10:20.964 { 00:10:20.964 "dma_device_id": "system", 00:10:20.964 "dma_device_type": 1 00:10:20.964 }, 00:10:20.964 { 00:10:20.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.964 "dma_device_type": 2 00:10:20.964 }, 00:10:20.964 { 00:10:20.964 "dma_device_id": "system", 00:10:20.964 "dma_device_type": 1 00:10:20.964 }, 00:10:20.964 { 00:10:20.964 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:20.964 "dma_device_type": 2 00:10:20.964 } 00:10:20.964 ], 00:10:20.964 "driver_specific": { 00:10:20.964 "raid": { 00:10:20.964 "uuid": "4cbc6250-ec9d-408e-aa07-c53348fed334", 00:10:20.964 "strip_size_kb": 64, 00:10:20.964 "state": "online", 00:10:20.964 "raid_level": "raid0", 00:10:20.964 "superblock": true, 00:10:20.964 "num_base_bdevs": 4, 00:10:20.964 "num_base_bdevs_discovered": 4, 00:10:20.964 "num_base_bdevs_operational": 4, 00:10:20.964 "base_bdevs_list": [ 00:10:20.964 { 00:10:20.964 "name": "pt1", 00:10:20.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.964 "is_configured": true, 00:10:20.964 "data_offset": 2048, 00:10:20.964 "data_size": 63488 00:10:20.964 }, 00:10:20.964 { 00:10:20.964 "name": "pt2", 00:10:20.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.964 "is_configured": true, 00:10:20.964 "data_offset": 2048, 00:10:20.964 "data_size": 63488 00:10:20.964 }, 00:10:20.964 { 00:10:20.964 "name": "pt3", 00:10:20.964 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.964 "is_configured": true, 00:10:20.964 "data_offset": 2048, 00:10:20.964 "data_size": 63488 00:10:20.964 }, 00:10:20.964 { 00:10:20.964 "name": "pt4", 00:10:20.964 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.964 "is_configured": true, 00:10:20.964 "data_offset": 2048, 00:10:20.964 "data_size": 63488 00:10:20.964 } 00:10:20.964 ] 00:10:20.964 } 00:10:20.964 } 00:10:20.964 }' 00:10:20.964 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.964 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.964 pt2 00:10:20.964 pt3 00:10:20.964 pt4' 00:10:20.964 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.964 21:17:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.964 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.235 21:17:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.235 [2024-11-26 21:17:39.320635] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4cbc6250-ec9d-408e-aa07-c53348fed334 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4cbc6250-ec9d-408e-aa07-c53348fed334 ']' 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.235 [2024-11-26 21:17:39.364271] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.235 [2024-11-26 21:17:39.364342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.235 [2024-11-26 21:17:39.364455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.235 [2024-11-26 21:17:39.364540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.235 [2024-11-26 21:17:39.364598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:21.235 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.496 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.497 21:17:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.497 [2024-11-26 21:17:39.532042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:21.497 [2024-11-26 21:17:39.533915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:21.497 [2024-11-26 21:17:39.533960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:21.497 [2024-11-26 21:17:39.534001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:21.497 [2024-11-26 21:17:39.534047] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:21.497 [2024-11-26 21:17:39.534088] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:21.497 [2024-11-26 21:17:39.534122] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:21.497 [2024-11-26 21:17:39.534141] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:21.497 [2024-11-26 21:17:39.534154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.497 [2024-11-26 21:17:39.534167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:21.497 request: 00:10:21.497 { 00:10:21.497 "name": "raid_bdev1", 00:10:21.497 "raid_level": "raid0", 00:10:21.497 "base_bdevs": [ 00:10:21.497 "malloc1", 00:10:21.497 "malloc2", 00:10:21.497 "malloc3", 00:10:21.497 "malloc4" 00:10:21.497 ], 00:10:21.497 "strip_size_kb": 64, 00:10:21.497 "superblock": false, 00:10:21.497 "method": "bdev_raid_create", 00:10:21.497 "req_id": 1 00:10:21.497 } 00:10:21.497 Got JSON-RPC error response 00:10:21.497 response: 00:10:21.497 { 00:10:21.497 "code": -17, 00:10:21.497 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:21.497 } 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.497 [2024-11-26 21:17:39.583919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.497 [2024-11-26 21:17:39.584017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.497 [2024-11-26 21:17:39.584050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:21.497 [2024-11-26 21:17:39.584117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.497 [2024-11-26 21:17:39.586188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.497 [2024-11-26 21:17:39.586259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.497 [2024-11-26 21:17:39.586368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.497 [2024-11-26 21:17:39.586445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:21.497 pt1 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.497 "name": "raid_bdev1", 00:10:21.497 "uuid": "4cbc6250-ec9d-408e-aa07-c53348fed334", 00:10:21.497 "strip_size_kb": 64, 00:10:21.497 "state": "configuring", 00:10:21.497 "raid_level": "raid0", 00:10:21.497 "superblock": true, 00:10:21.497 "num_base_bdevs": 4, 00:10:21.497 "num_base_bdevs_discovered": 1, 00:10:21.497 "num_base_bdevs_operational": 4, 00:10:21.497 "base_bdevs_list": [ 00:10:21.497 { 00:10:21.497 "name": "pt1", 00:10:21.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.497 "is_configured": true, 00:10:21.497 "data_offset": 2048, 00:10:21.497 "data_size": 63488 00:10:21.497 }, 00:10:21.497 { 00:10:21.497 "name": null, 00:10:21.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.497 "is_configured": false, 00:10:21.497 "data_offset": 2048, 00:10:21.497 "data_size": 63488 00:10:21.497 }, 00:10:21.497 { 00:10:21.497 "name": null, 00:10:21.497 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.497 "is_configured": false, 00:10:21.497 "data_offset": 2048, 00:10:21.497 "data_size": 63488 00:10:21.497 }, 00:10:21.497 { 00:10:21.497 "name": null, 00:10:21.497 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.497 "is_configured": false, 00:10:21.497 "data_offset": 2048, 00:10:21.497 "data_size": 63488 00:10:21.497 } 00:10:21.497 ] 00:10:21.497 }' 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.497 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.068 [2024-11-26 21:17:39.971247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.068 [2024-11-26 21:17:39.971303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.068 [2024-11-26 21:17:39.971318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:22.068 [2024-11-26 21:17:39.971327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.068 [2024-11-26 21:17:39.971699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.068 [2024-11-26 21:17:39.971717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.068 [2024-11-26 21:17:39.971776] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:22.068 [2024-11-26 21:17:39.971797] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.068 pt2 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.068 [2024-11-26 21:17:39.979250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.068 21:17:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.068 21:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.068 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.068 "name": "raid_bdev1", 00:10:22.068 "uuid": "4cbc6250-ec9d-408e-aa07-c53348fed334", 00:10:22.068 "strip_size_kb": 64, 00:10:22.068 "state": "configuring", 00:10:22.068 "raid_level": "raid0", 00:10:22.068 "superblock": true, 00:10:22.068 "num_base_bdevs": 4, 00:10:22.068 "num_base_bdevs_discovered": 1, 00:10:22.068 "num_base_bdevs_operational": 4, 00:10:22.068 "base_bdevs_list": [ 00:10:22.068 { 00:10:22.068 "name": "pt1", 00:10:22.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.068 "is_configured": true, 00:10:22.068 "data_offset": 2048, 00:10:22.068 "data_size": 63488 00:10:22.068 }, 00:10:22.068 { 00:10:22.068 "name": null, 00:10:22.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.068 "is_configured": false, 00:10:22.068 "data_offset": 0, 00:10:22.068 "data_size": 63488 00:10:22.068 }, 00:10:22.068 { 00:10:22.068 "name": null, 00:10:22.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.068 "is_configured": false, 00:10:22.068 "data_offset": 2048, 00:10:22.068 "data_size": 63488 00:10:22.068 }, 00:10:22.068 { 00:10:22.068 "name": null, 00:10:22.068 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.068 "is_configured": false, 00:10:22.068 "data_offset": 2048, 00:10:22.068 "data_size": 63488 00:10:22.068 } 00:10:22.068 ] 00:10:22.068 }' 00:10:22.068 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.068 21:17:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.329 [2024-11-26 21:17:40.442471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.329 [2024-11-26 21:17:40.442541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.329 [2024-11-26 21:17:40.442562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:22.329 [2024-11-26 21:17:40.442571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.329 [2024-11-26 21:17:40.443024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.329 [2024-11-26 21:17:40.443043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.329 [2024-11-26 21:17:40.443124] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:22.329 [2024-11-26 21:17:40.443145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.329 pt2 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.329 [2024-11-26 21:17:40.454417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.329 [2024-11-26 21:17:40.454465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.329 [2024-11-26 21:17:40.454497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:22.329 [2024-11-26 21:17:40.454505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.329 [2024-11-26 21:17:40.454872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.329 [2024-11-26 21:17:40.454888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.329 [2024-11-26 21:17:40.454946] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:22.329 [2024-11-26 21:17:40.454969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.329 pt3 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.329 [2024-11-26 21:17:40.466381] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:22.329 [2024-11-26 21:17:40.466421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.329 [2024-11-26 21:17:40.466436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:22.329 [2024-11-26 21:17:40.466444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.329 [2024-11-26 21:17:40.466799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.329 [2024-11-26 21:17:40.466815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:22.329 [2024-11-26 21:17:40.466872] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:22.329 [2024-11-26 21:17:40.466892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:22.329 [2024-11-26 21:17:40.467028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:22.329 [2024-11-26 21:17:40.467038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:22.329 [2024-11-26 21:17:40.467296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:22.329 [2024-11-26 21:17:40.467459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:22.329 [2024-11-26 21:17:40.467480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:22.329 [2024-11-26 21:17:40.467610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.329 pt4 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.329 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.589 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.589 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.589 "name": "raid_bdev1", 00:10:22.589 "uuid": "4cbc6250-ec9d-408e-aa07-c53348fed334", 00:10:22.589 "strip_size_kb": 64, 00:10:22.589 "state": "online", 00:10:22.589 "raid_level": "raid0", 00:10:22.589 
"superblock": true, 00:10:22.589 "num_base_bdevs": 4, 00:10:22.589 "num_base_bdevs_discovered": 4, 00:10:22.589 "num_base_bdevs_operational": 4, 00:10:22.589 "base_bdevs_list": [ 00:10:22.589 { 00:10:22.589 "name": "pt1", 00:10:22.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.589 "is_configured": true, 00:10:22.590 "data_offset": 2048, 00:10:22.590 "data_size": 63488 00:10:22.590 }, 00:10:22.590 { 00:10:22.590 "name": "pt2", 00:10:22.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.590 "is_configured": true, 00:10:22.590 "data_offset": 2048, 00:10:22.590 "data_size": 63488 00:10:22.590 }, 00:10:22.590 { 00:10:22.590 "name": "pt3", 00:10:22.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.590 "is_configured": true, 00:10:22.590 "data_offset": 2048, 00:10:22.590 "data_size": 63488 00:10:22.590 }, 00:10:22.590 { 00:10:22.590 "name": "pt4", 00:10:22.590 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.590 "is_configured": true, 00:10:22.590 "data_offset": 2048, 00:10:22.590 "data_size": 63488 00:10:22.590 } 00:10:22.590 ] 00:10:22.590 }' 00:10:22.590 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.590 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.850 21:17:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.850 [2024-11-26 21:17:40.929963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.850 "name": "raid_bdev1", 00:10:22.850 "aliases": [ 00:10:22.850 "4cbc6250-ec9d-408e-aa07-c53348fed334" 00:10:22.850 ], 00:10:22.850 "product_name": "Raid Volume", 00:10:22.850 "block_size": 512, 00:10:22.850 "num_blocks": 253952, 00:10:22.850 "uuid": "4cbc6250-ec9d-408e-aa07-c53348fed334", 00:10:22.850 "assigned_rate_limits": { 00:10:22.850 "rw_ios_per_sec": 0, 00:10:22.850 "rw_mbytes_per_sec": 0, 00:10:22.850 "r_mbytes_per_sec": 0, 00:10:22.850 "w_mbytes_per_sec": 0 00:10:22.850 }, 00:10:22.850 "claimed": false, 00:10:22.850 "zoned": false, 00:10:22.850 "supported_io_types": { 00:10:22.850 "read": true, 00:10:22.850 "write": true, 00:10:22.850 "unmap": true, 00:10:22.850 "flush": true, 00:10:22.850 "reset": true, 00:10:22.850 "nvme_admin": false, 00:10:22.850 "nvme_io": false, 00:10:22.850 "nvme_io_md": false, 00:10:22.850 "write_zeroes": true, 00:10:22.850 "zcopy": false, 00:10:22.850 "get_zone_info": false, 00:10:22.850 "zone_management": false, 00:10:22.850 "zone_append": false, 00:10:22.850 "compare": false, 00:10:22.850 "compare_and_write": false, 00:10:22.850 "abort": false, 00:10:22.850 "seek_hole": false, 00:10:22.850 "seek_data": false, 00:10:22.850 "copy": false, 00:10:22.850 "nvme_iov_md": false 00:10:22.850 }, 00:10:22.850 
"memory_domains": [ 00:10:22.850 { 00:10:22.850 "dma_device_id": "system", 00:10:22.850 "dma_device_type": 1 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.850 "dma_device_type": 2 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "dma_device_id": "system", 00:10:22.850 "dma_device_type": 1 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.850 "dma_device_type": 2 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "dma_device_id": "system", 00:10:22.850 "dma_device_type": 1 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.850 "dma_device_type": 2 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "dma_device_id": "system", 00:10:22.850 "dma_device_type": 1 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.850 "dma_device_type": 2 00:10:22.850 } 00:10:22.850 ], 00:10:22.850 "driver_specific": { 00:10:22.850 "raid": { 00:10:22.850 "uuid": "4cbc6250-ec9d-408e-aa07-c53348fed334", 00:10:22.850 "strip_size_kb": 64, 00:10:22.850 "state": "online", 00:10:22.850 "raid_level": "raid0", 00:10:22.850 "superblock": true, 00:10:22.850 "num_base_bdevs": 4, 00:10:22.850 "num_base_bdevs_discovered": 4, 00:10:22.850 "num_base_bdevs_operational": 4, 00:10:22.850 "base_bdevs_list": [ 00:10:22.850 { 00:10:22.850 "name": "pt1", 00:10:22.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.850 "is_configured": true, 00:10:22.850 "data_offset": 2048, 00:10:22.850 "data_size": 63488 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "name": "pt2", 00:10:22.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.850 "is_configured": true, 00:10:22.850 "data_offset": 2048, 00:10:22.850 "data_size": 63488 00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "name": "pt3", 00:10:22.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.850 "is_configured": true, 00:10:22.850 "data_offset": 2048, 00:10:22.850 "data_size": 63488 
00:10:22.850 }, 00:10:22.850 { 00:10:22.850 "name": "pt4", 00:10:22.850 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.850 "is_configured": true, 00:10:22.850 "data_offset": 2048, 00:10:22.850 "data_size": 63488 00:10:22.850 } 00:10:22.850 ] 00:10:22.850 } 00:10:22.850 } 00:10:22.850 }' 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:22.850 pt2 00:10:22.850 pt3 00:10:22.850 pt4' 00:10:22.850 21:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.110 
21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:23.110 [2024-11-26 21:17:41.213416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.110 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.111 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4cbc6250-ec9d-408e-aa07-c53348fed334 '!=' 4cbc6250-ec9d-408e-aa07-c53348fed334 ']' 00:10:23.111 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:23.111 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.111 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:23.111 21:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70540 00:10:23.111 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70540 ']' 00:10:23.111 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70540 00:10:23.111 21:17:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:23.370 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.370 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70540 00:10:23.370 killing process with pid 70540 00:10:23.370 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.370 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.370 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70540' 00:10:23.370 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70540 00:10:23.370 [2024-11-26 21:17:41.289483] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.370 [2024-11-26 21:17:41.289576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.371 21:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70540 00:10:23.371 [2024-11-26 21:17:41.289647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.371 [2024-11-26 21:17:41.289657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:23.630 [2024-11-26 21:17:41.674516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.013 ************************************ 00:10:25.013 END TEST raid_superblock_test 00:10:25.013 ************************************ 00:10:25.013 21:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:25.013 00:10:25.013 real 0m5.288s 00:10:25.013 user 0m7.520s 00:10:25.013 sys 0m0.953s 00:10:25.013 21:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.013 21:17:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.013 21:17:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:25.013 21:17:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.013 21:17:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.013 21:17:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.013 ************************************ 00:10:25.013 START TEST raid_read_error_test 00:10:25.013 ************************************ 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.q1tayKCYZB 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70805 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70805 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70805 ']' 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.013 21:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.013 [2024-11-26 21:17:42.914266] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:25.013 [2024-11-26 21:17:42.914474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70805 ] 00:10:25.013 [2024-11-26 21:17:43.086513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.273 [2024-11-26 21:17:43.190930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.273 [2024-11-26 21:17:43.386129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.273 [2024-11-26 21:17:43.386183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 BaseBdev1_malloc 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 true 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 [2024-11-26 21:17:43.790189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:25.844 [2024-11-26 21:17:43.790285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.844 [2024-11-26 21:17:43.790337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:25.844 [2024-11-26 21:17:43.790367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.844 [2024-11-26 21:17:43.792385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.844 [2024-11-26 21:17:43.792463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:25.844 BaseBdev1 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 BaseBdev2_malloc 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 true 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 [2024-11-26 21:17:43.852066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:25.844 [2024-11-26 21:17:43.852172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.844 [2024-11-26 21:17:43.852204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:25.844 [2024-11-26 21:17:43.852233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.844 [2024-11-26 21:17:43.854215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.844 [2024-11-26 21:17:43.854284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:25.844 BaseBdev2 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 BaseBdev3_malloc 00:10:25.844 21:17:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 true 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 [2024-11-26 21:17:43.929701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:25.844 [2024-11-26 21:17:43.929754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.844 [2024-11-26 21:17:43.929771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:25.844 [2024-11-26 21:17:43.929781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.844 [2024-11-26 21:17:43.931800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.844 [2024-11-26 21:17:43.931838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:25.844 BaseBdev3 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 BaseBdev4_malloc 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 true 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.844 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.844 [2024-11-26 21:17:43.993100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:25.844 [2024-11-26 21:17:43.993190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.844 [2024-11-26 21:17:43.993240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:25.844 [2024-11-26 21:17:43.993270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.844 [2024-11-26 21:17:43.995259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.844 [2024-11-26 21:17:43.995333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:25.844 BaseBdev4 00:10:26.104 21:17:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.105 21:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:26.105 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.105 21:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.105 [2024-11-26 21:17:44.005169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.105 [2024-11-26 21:17:44.007035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.105 [2024-11-26 21:17:44.007165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.105 [2024-11-26 21:17:44.007248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:26.105 [2024-11-26 21:17:44.007490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:26.105 [2024-11-26 21:17:44.007542] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:26.105 [2024-11-26 21:17:44.007819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:26.105 [2024-11-26 21:17:44.008043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:26.105 [2024-11-26 21:17:44.008088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:26.105 [2024-11-26 21:17:44.008296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:26.105 21:17:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.105 "name": "raid_bdev1", 00:10:26.105 "uuid": "df254538-212c-4e57-82a5-ad5ebfc0f82d", 00:10:26.105 "strip_size_kb": 64, 00:10:26.105 "state": "online", 00:10:26.105 "raid_level": "raid0", 00:10:26.105 "superblock": true, 00:10:26.105 "num_base_bdevs": 4, 00:10:26.105 "num_base_bdevs_discovered": 4, 00:10:26.105 "num_base_bdevs_operational": 4, 00:10:26.105 "base_bdevs_list": [ 00:10:26.105 
{ 00:10:26.105 "name": "BaseBdev1", 00:10:26.105 "uuid": "c0283219-1d87-569b-91cd-d5cd5cb648da", 00:10:26.105 "is_configured": true, 00:10:26.105 "data_offset": 2048, 00:10:26.105 "data_size": 63488 00:10:26.105 }, 00:10:26.105 { 00:10:26.105 "name": "BaseBdev2", 00:10:26.105 "uuid": "6c724cab-190f-50c6-be5a-6df927ae2015", 00:10:26.105 "is_configured": true, 00:10:26.105 "data_offset": 2048, 00:10:26.105 "data_size": 63488 00:10:26.105 }, 00:10:26.105 { 00:10:26.105 "name": "BaseBdev3", 00:10:26.105 "uuid": "05c1ffc6-c015-5f52-ab99-abfd4cb25165", 00:10:26.105 "is_configured": true, 00:10:26.105 "data_offset": 2048, 00:10:26.105 "data_size": 63488 00:10:26.105 }, 00:10:26.105 { 00:10:26.105 "name": "BaseBdev4", 00:10:26.105 "uuid": "8f3cb6e2-435e-54ec-84b5-60ab80ed5289", 00:10:26.105 "is_configured": true, 00:10:26.105 "data_offset": 2048, 00:10:26.105 "data_size": 63488 00:10:26.105 } 00:10:26.105 ] 00:10:26.105 }' 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.105 21:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.365 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:26.365 21:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:26.625 [2024-11-26 21:17:44.529581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.565 21:17:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.565 21:17:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.565 "name": "raid_bdev1", 00:10:27.565 "uuid": "df254538-212c-4e57-82a5-ad5ebfc0f82d", 00:10:27.565 "strip_size_kb": 64, 00:10:27.565 "state": "online", 00:10:27.565 "raid_level": "raid0", 00:10:27.565 "superblock": true, 00:10:27.565 "num_base_bdevs": 4, 00:10:27.565 "num_base_bdevs_discovered": 4, 00:10:27.565 "num_base_bdevs_operational": 4, 00:10:27.565 "base_bdevs_list": [ 00:10:27.565 { 00:10:27.565 "name": "BaseBdev1", 00:10:27.565 "uuid": "c0283219-1d87-569b-91cd-d5cd5cb648da", 00:10:27.565 "is_configured": true, 00:10:27.565 "data_offset": 2048, 00:10:27.565 "data_size": 63488 00:10:27.565 }, 00:10:27.565 { 00:10:27.565 "name": "BaseBdev2", 00:10:27.565 "uuid": "6c724cab-190f-50c6-be5a-6df927ae2015", 00:10:27.565 "is_configured": true, 00:10:27.565 "data_offset": 2048, 00:10:27.565 "data_size": 63488 00:10:27.565 }, 00:10:27.565 { 00:10:27.565 "name": "BaseBdev3", 00:10:27.565 "uuid": "05c1ffc6-c015-5f52-ab99-abfd4cb25165", 00:10:27.565 "is_configured": true, 00:10:27.565 "data_offset": 2048, 00:10:27.565 "data_size": 63488 00:10:27.565 }, 00:10:27.565 { 00:10:27.565 "name": "BaseBdev4", 00:10:27.565 "uuid": "8f3cb6e2-435e-54ec-84b5-60ab80ed5289", 00:10:27.565 "is_configured": true, 00:10:27.565 "data_offset": 2048, 00:10:27.565 "data_size": 63488 00:10:27.565 } 00:10:27.565 ] 00:10:27.565 }' 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.565 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.825 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.826 [2024-11-26 21:17:45.873248] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.826 [2024-11-26 21:17:45.873351] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.826 [2024-11-26 21:17:45.876420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.826 [2024-11-26 21:17:45.876520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.826 [2024-11-26 21:17:45.876584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.826 [2024-11-26 21:17:45.876649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:27.826 { 00:10:27.826 "results": [ 00:10:27.826 { 00:10:27.826 "job": "raid_bdev1", 00:10:27.826 "core_mask": "0x1", 00:10:27.826 "workload": "randrw", 00:10:27.826 "percentage": 50, 00:10:27.826 "status": "finished", 00:10:27.826 "queue_depth": 1, 00:10:27.826 "io_size": 131072, 00:10:27.826 "runtime": 1.344691, 00:10:27.826 "iops": 15724.058538355652, 00:10:27.826 "mibps": 1965.5073172944565, 00:10:27.826 "io_failed": 1, 00:10:27.826 "io_timeout": 0, 00:10:27.826 "avg_latency_us": 88.11155413700989, 00:10:27.826 "min_latency_us": 25.3764192139738, 00:10:27.826 "max_latency_us": 1359.3711790393013 00:10:27.826 } 00:10:27.826 ], 00:10:27.826 "core_count": 1 00:10:27.826 } 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70805 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70805 ']' 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70805 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70805 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70805' 00:10:27.826 killing process with pid 70805 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70805 00:10:27.826 [2024-11-26 21:17:45.918129] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.826 21:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70805 00:10:28.086 [2024-11-26 21:17:46.228751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.467 21:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.q1tayKCYZB 00:10:29.467 21:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:29.467 21:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:29.467 21:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:29.467 21:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:29.467 21:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.467 21:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.467 21:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:29.467 00:10:29.467 real 0m4.579s 00:10:29.467 user 0m5.373s 00:10:29.467 sys 0m0.575s 00:10:29.467 21:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:29.467 ************************************ 00:10:29.467 END TEST raid_read_error_test 00:10:29.467 ************************************ 00:10:29.467 21:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.467 21:17:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:29.467 21:17:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:29.467 21:17:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.467 21:17:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.467 ************************************ 00:10:29.467 START TEST raid_write_error_test 00:10:29.467 ************************************ 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HVlDg72zDH 00:10:29.467 21:17:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70945 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70945 00:10:29.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70945 ']' 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.467 21:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.467 [2024-11-26 21:17:47.567351] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:29.467 [2024-11-26 21:17:47.567480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70945 ] 00:10:29.726 [2024-11-26 21:17:47.738319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.726 [2024-11-26 21:17:47.849626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.986 [2024-11-26 21:17:48.035875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.986 [2024-11-26 21:17:48.035907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.245 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.245 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:30.245 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.245 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:30.245 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.245 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.505 BaseBdev1_malloc 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.505 true 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.505 [2024-11-26 21:17:48.451521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:30.505 [2024-11-26 21:17:48.451576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.505 [2024-11-26 21:17:48.451595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:30.505 [2024-11-26 21:17:48.451605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.505 [2024-11-26 21:17:48.453691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.505 [2024-11-26 21:17:48.453731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:30.505 BaseBdev1 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.505 BaseBdev2_malloc 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:30.505 21:17:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.505 true 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.505 [2024-11-26 21:17:48.518344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:30.505 [2024-11-26 21:17:48.518397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.505 [2024-11-26 21:17:48.518429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:30.505 [2024-11-26 21:17:48.518439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.505 [2024-11-26 21:17:48.520458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.505 [2024-11-26 21:17:48.520500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:30.505 BaseBdev2 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.505 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:30.506 BaseBdev3_malloc 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.506 true 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.506 [2024-11-26 21:17:48.596977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:30.506 [2024-11-26 21:17:48.597030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.506 [2024-11-26 21:17:48.597064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:30.506 [2024-11-26 21:17:48.597074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.506 [2024-11-26 21:17:48.599130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.506 [2024-11-26 21:17:48.599242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:30.506 BaseBdev3 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.506 BaseBdev4_malloc 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.506 true 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.506 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.770 [2024-11-26 21:17:48.660876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:30.770 [2024-11-26 21:17:48.660931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.770 [2024-11-26 21:17:48.660963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:30.770 [2024-11-26 21:17:48.660989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.770 [2024-11-26 21:17:48.662953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.770 [2024-11-26 21:17:48.663054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:30.770 BaseBdev4 
00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.770 [2024-11-26 21:17:48.672918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.770 [2024-11-26 21:17:48.674710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.770 [2024-11-26 21:17:48.674781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.770 [2024-11-26 21:17:48.674851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:30.770 [2024-11-26 21:17:48.675066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:30.770 [2024-11-26 21:17:48.675082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:30.770 [2024-11-26 21:17:48.675326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:30.770 [2024-11-26 21:17:48.675480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:30.770 [2024-11-26 21:17:48.675491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:30.770 [2024-11-26 21:17:48.675630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.770 "name": "raid_bdev1", 00:10:30.770 "uuid": "0ca85904-42a3-4415-9ab5-5d2b3374017a", 00:10:30.770 "strip_size_kb": 64, 00:10:30.770 "state": "online", 00:10:30.770 "raid_level": "raid0", 00:10:30.770 "superblock": true, 00:10:30.770 "num_base_bdevs": 4, 00:10:30.770 "num_base_bdevs_discovered": 4, 00:10:30.770 
"num_base_bdevs_operational": 4, 00:10:30.770 "base_bdevs_list": [ 00:10:30.770 { 00:10:30.770 "name": "BaseBdev1", 00:10:30.770 "uuid": "eb7aee77-7328-53fb-afce-308f2fb34688", 00:10:30.770 "is_configured": true, 00:10:30.770 "data_offset": 2048, 00:10:30.770 "data_size": 63488 00:10:30.770 }, 00:10:30.770 { 00:10:30.770 "name": "BaseBdev2", 00:10:30.770 "uuid": "da031f85-c37c-52c6-b5ed-2af1e25dea78", 00:10:30.770 "is_configured": true, 00:10:30.770 "data_offset": 2048, 00:10:30.770 "data_size": 63488 00:10:30.770 }, 00:10:30.770 { 00:10:30.770 "name": "BaseBdev3", 00:10:30.770 "uuid": "f56a81e9-6f2a-5081-a760-eaf6626a5f0a", 00:10:30.770 "is_configured": true, 00:10:30.770 "data_offset": 2048, 00:10:30.770 "data_size": 63488 00:10:30.770 }, 00:10:30.770 { 00:10:30.770 "name": "BaseBdev4", 00:10:30.770 "uuid": "307435ee-c721-58b6-8e1e-ed550133cc50", 00:10:30.770 "is_configured": true, 00:10:30.770 "data_offset": 2048, 00:10:30.770 "data_size": 63488 00:10:30.770 } 00:10:30.770 ] 00:10:30.770 }' 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.770 21:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.030 21:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:31.030 21:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:31.288 [2024-11-26 21:17:49.221340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.228 "name": "raid_bdev1", 00:10:32.228 "uuid": "0ca85904-42a3-4415-9ab5-5d2b3374017a", 00:10:32.228 "strip_size_kb": 64, 00:10:32.228 "state": "online", 00:10:32.228 "raid_level": "raid0", 00:10:32.228 "superblock": true, 00:10:32.228 "num_base_bdevs": 4, 00:10:32.228 "num_base_bdevs_discovered": 4, 00:10:32.228 "num_base_bdevs_operational": 4, 00:10:32.228 "base_bdevs_list": [ 00:10:32.228 { 00:10:32.228 "name": "BaseBdev1", 00:10:32.228 "uuid": "eb7aee77-7328-53fb-afce-308f2fb34688", 00:10:32.228 "is_configured": true, 00:10:32.228 "data_offset": 2048, 00:10:32.228 "data_size": 63488 00:10:32.228 }, 00:10:32.228 { 00:10:32.228 "name": "BaseBdev2", 00:10:32.228 "uuid": "da031f85-c37c-52c6-b5ed-2af1e25dea78", 00:10:32.228 "is_configured": true, 00:10:32.228 "data_offset": 2048, 00:10:32.228 "data_size": 63488 00:10:32.228 }, 00:10:32.228 { 00:10:32.228 "name": "BaseBdev3", 00:10:32.228 "uuid": "f56a81e9-6f2a-5081-a760-eaf6626a5f0a", 00:10:32.228 "is_configured": true, 00:10:32.228 "data_offset": 2048, 00:10:32.228 "data_size": 63488 00:10:32.228 }, 00:10:32.228 { 00:10:32.228 "name": "BaseBdev4", 00:10:32.228 "uuid": "307435ee-c721-58b6-8e1e-ed550133cc50", 00:10:32.228 "is_configured": true, 00:10:32.228 "data_offset": 2048, 00:10:32.228 "data_size": 63488 00:10:32.228 } 00:10:32.228 ] 00:10:32.228 }' 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.228 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:32.488 [2024-11-26 21:17:50.571464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.488 [2024-11-26 21:17:50.571557] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.488 [2024-11-26 21:17:50.574397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.488 [2024-11-26 21:17:50.574509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.488 [2024-11-26 21:17:50.574571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.488 [2024-11-26 21:17:50.574630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:32.488 { 00:10:32.488 "results": [ 00:10:32.488 { 00:10:32.488 "job": "raid_bdev1", 00:10:32.488 "core_mask": "0x1", 00:10:32.488 "workload": "randrw", 00:10:32.488 "percentage": 50, 00:10:32.488 "status": "finished", 00:10:32.488 "queue_depth": 1, 00:10:32.488 "io_size": 131072, 00:10:32.488 "runtime": 1.35106, 00:10:32.488 "iops": 15774.28093496958, 00:10:32.488 "mibps": 1971.7851168711975, 00:10:32.488 "io_failed": 1, 00:10:32.488 "io_timeout": 0, 00:10:32.488 "avg_latency_us": 87.85967487707136, 00:10:32.488 "min_latency_us": 25.3764192139738, 00:10:32.488 "max_latency_us": 1438.071615720524 00:10:32.488 } 00:10:32.488 ], 00:10:32.488 "core_count": 1 00:10:32.488 } 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70945 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70945 ']' 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70945 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70945 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70945' 00:10:32.488 killing process with pid 70945 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70945 00:10:32.488 [2024-11-26 21:17:50.617727] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.488 21:17:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70945 00:10:33.058 [2024-11-26 21:17:50.934231] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.997 21:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HVlDg72zDH 00:10:33.997 21:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.997 21:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.997 21:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:33.997 21:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:33.997 21:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.997 21:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:33.997 21:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:33.997 00:10:33.997 real 0m4.620s 00:10:33.997 user 0m5.456s 00:10:33.997 sys 0m0.568s 00:10:33.997 
************************************ 00:10:33.997 END TEST raid_write_error_test 00:10:33.997 ************************************ 00:10:33.997 21:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.997 21:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.997 21:17:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:33.997 21:17:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:33.997 21:17:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:33.997 21:17:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.997 21:17:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.997 ************************************ 00:10:34.258 START TEST raid_state_function_test 00:10:34.258 ************************************ 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.258 21:17:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:34.258 21:17:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71089 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71089' 00:10:34.258 Process raid pid: 71089 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71089 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71089 ']' 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.258 21:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.258 [2024-11-26 21:17:52.259145] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:34.258 [2024-11-26 21:17:52.259374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.518 [2024-11-26 21:17:52.434799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.518 [2024-11-26 21:17:52.547363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.778 [2024-11-26 21:17:52.747971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.778 [2024-11-26 21:17:52.748096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.039 [2024-11-26 21:17:53.075862] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.039 [2024-11-26 21:17:53.075975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.039 [2024-11-26 21:17:53.075990] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.039 [2024-11-26 21:17:53.076001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.039 [2024-11-26 21:17:53.076007] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:35.039 [2024-11-26 21:17:53.076016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.039 [2024-11-26 21:17:53.076022] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:35.039 [2024-11-26 21:17:53.076030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.039 "name": "Existed_Raid", 00:10:35.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.039 "strip_size_kb": 64, 00:10:35.039 "state": "configuring", 00:10:35.039 "raid_level": "concat", 00:10:35.039 "superblock": false, 00:10:35.039 "num_base_bdevs": 4, 00:10:35.039 "num_base_bdevs_discovered": 0, 00:10:35.039 "num_base_bdevs_operational": 4, 00:10:35.039 "base_bdevs_list": [ 00:10:35.039 { 00:10:35.039 "name": "BaseBdev1", 00:10:35.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.039 "is_configured": false, 00:10:35.039 "data_offset": 0, 00:10:35.039 "data_size": 0 00:10:35.039 }, 00:10:35.039 { 00:10:35.039 "name": "BaseBdev2", 00:10:35.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.039 "is_configured": false, 00:10:35.039 "data_offset": 0, 00:10:35.039 "data_size": 0 00:10:35.039 }, 00:10:35.039 { 00:10:35.039 "name": "BaseBdev3", 00:10:35.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.039 "is_configured": false, 00:10:35.039 "data_offset": 0, 00:10:35.039 "data_size": 0 00:10:35.039 }, 00:10:35.039 { 00:10:35.039 "name": "BaseBdev4", 00:10:35.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.039 "is_configured": false, 00:10:35.039 "data_offset": 0, 00:10:35.039 "data_size": 0 00:10:35.039 } 00:10:35.039 ] 00:10:35.039 }' 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.039 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.608 [2024-11-26 21:17:53.503126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.608 [2024-11-26 21:17:53.503234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.608 [2024-11-26 21:17:53.515081] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.608 [2024-11-26 21:17:53.515172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.608 [2024-11-26 21:17:53.515200] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.608 [2024-11-26 21:17:53.515223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.608 [2024-11-26 21:17:53.515242] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.608 [2024-11-26 21:17:53.515263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.608 [2024-11-26 21:17:53.515281] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:35.608 [2024-11-26 21:17:53.515311] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.608 [2024-11-26 21:17:53.561753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.608 BaseBdev1 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.608 [ 00:10:35.608 { 00:10:35.608 "name": "BaseBdev1", 00:10:35.608 "aliases": [ 00:10:35.608 "6b6120e6-e741-4ad7-985f-e525afa7486c" 00:10:35.608 ], 00:10:35.608 "product_name": "Malloc disk", 00:10:35.608 "block_size": 512, 00:10:35.608 "num_blocks": 65536, 00:10:35.608 "uuid": "6b6120e6-e741-4ad7-985f-e525afa7486c", 00:10:35.608 "assigned_rate_limits": { 00:10:35.608 "rw_ios_per_sec": 0, 00:10:35.608 "rw_mbytes_per_sec": 0, 00:10:35.608 "r_mbytes_per_sec": 0, 00:10:35.608 "w_mbytes_per_sec": 0 00:10:35.608 }, 00:10:35.608 "claimed": true, 00:10:35.608 "claim_type": "exclusive_write", 00:10:35.608 "zoned": false, 00:10:35.608 "supported_io_types": { 00:10:35.608 "read": true, 00:10:35.608 "write": true, 00:10:35.608 "unmap": true, 00:10:35.608 "flush": true, 00:10:35.608 "reset": true, 00:10:35.608 "nvme_admin": false, 00:10:35.608 "nvme_io": false, 00:10:35.608 "nvme_io_md": false, 00:10:35.608 "write_zeroes": true, 00:10:35.608 "zcopy": true, 00:10:35.608 "get_zone_info": false, 00:10:35.608 "zone_management": false, 00:10:35.608 "zone_append": false, 00:10:35.608 "compare": false, 00:10:35.608 "compare_and_write": false, 00:10:35.608 "abort": true, 00:10:35.608 "seek_hole": false, 00:10:35.608 "seek_data": false, 00:10:35.608 "copy": true, 00:10:35.608 "nvme_iov_md": false 00:10:35.608 }, 00:10:35.608 "memory_domains": [ 00:10:35.608 { 00:10:35.608 "dma_device_id": "system", 00:10:35.608 "dma_device_type": 1 00:10:35.608 }, 00:10:35.608 { 00:10:35.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.608 "dma_device_type": 2 00:10:35.608 } 00:10:35.608 ], 00:10:35.608 "driver_specific": {} 00:10:35.608 } 00:10:35.608 ] 00:10:35.608 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.609 "name": "Existed_Raid", 
00:10:35.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.609 "strip_size_kb": 64, 00:10:35.609 "state": "configuring", 00:10:35.609 "raid_level": "concat", 00:10:35.609 "superblock": false, 00:10:35.609 "num_base_bdevs": 4, 00:10:35.609 "num_base_bdevs_discovered": 1, 00:10:35.609 "num_base_bdevs_operational": 4, 00:10:35.609 "base_bdevs_list": [ 00:10:35.609 { 00:10:35.609 "name": "BaseBdev1", 00:10:35.609 "uuid": "6b6120e6-e741-4ad7-985f-e525afa7486c", 00:10:35.609 "is_configured": true, 00:10:35.609 "data_offset": 0, 00:10:35.609 "data_size": 65536 00:10:35.609 }, 00:10:35.609 { 00:10:35.609 "name": "BaseBdev2", 00:10:35.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.609 "is_configured": false, 00:10:35.609 "data_offset": 0, 00:10:35.609 "data_size": 0 00:10:35.609 }, 00:10:35.609 { 00:10:35.609 "name": "BaseBdev3", 00:10:35.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.609 "is_configured": false, 00:10:35.609 "data_offset": 0, 00:10:35.609 "data_size": 0 00:10:35.609 }, 00:10:35.609 { 00:10:35.609 "name": "BaseBdev4", 00:10:35.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.609 "is_configured": false, 00:10:35.609 "data_offset": 0, 00:10:35.609 "data_size": 0 00:10:35.609 } 00:10:35.609 ] 00:10:35.609 }' 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.609 21:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.176 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.176 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.176 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.176 [2024-11-26 21:17:54.065043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.177 [2024-11-26 21:17:54.065101] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.177 [2024-11-26 21:17:54.077082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.177 [2024-11-26 21:17:54.078849] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.177 [2024-11-26 21:17:54.078892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.177 [2024-11-26 21:17:54.078902] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.177 [2024-11-26 21:17:54.078912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.177 [2024-11-26 21:17:54.078919] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:36.177 [2024-11-26 21:17:54.078926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.177 "name": "Existed_Raid", 00:10:36.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.177 "strip_size_kb": 64, 00:10:36.177 "state": "configuring", 00:10:36.177 "raid_level": "concat", 00:10:36.177 "superblock": false, 00:10:36.177 "num_base_bdevs": 4, 00:10:36.177 
"num_base_bdevs_discovered": 1, 00:10:36.177 "num_base_bdevs_operational": 4, 00:10:36.177 "base_bdevs_list": [ 00:10:36.177 { 00:10:36.177 "name": "BaseBdev1", 00:10:36.177 "uuid": "6b6120e6-e741-4ad7-985f-e525afa7486c", 00:10:36.177 "is_configured": true, 00:10:36.177 "data_offset": 0, 00:10:36.177 "data_size": 65536 00:10:36.177 }, 00:10:36.177 { 00:10:36.177 "name": "BaseBdev2", 00:10:36.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.177 "is_configured": false, 00:10:36.177 "data_offset": 0, 00:10:36.177 "data_size": 0 00:10:36.177 }, 00:10:36.177 { 00:10:36.177 "name": "BaseBdev3", 00:10:36.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.177 "is_configured": false, 00:10:36.177 "data_offset": 0, 00:10:36.177 "data_size": 0 00:10:36.177 }, 00:10:36.177 { 00:10:36.177 "name": "BaseBdev4", 00:10:36.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.177 "is_configured": false, 00:10:36.177 "data_offset": 0, 00:10:36.177 "data_size": 0 00:10:36.177 } 00:10:36.177 ] 00:10:36.177 }' 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.177 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.436 [2024-11-26 21:17:54.509882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.436 BaseBdev2 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:36.436 21:17:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.436 [ 00:10:36.436 { 00:10:36.436 "name": "BaseBdev2", 00:10:36.436 "aliases": [ 00:10:36.436 "7fe2cbd5-3930-4a9d-8505-60977f78192e" 00:10:36.436 ], 00:10:36.436 "product_name": "Malloc disk", 00:10:36.436 "block_size": 512, 00:10:36.436 "num_blocks": 65536, 00:10:36.436 "uuid": "7fe2cbd5-3930-4a9d-8505-60977f78192e", 00:10:36.436 "assigned_rate_limits": { 00:10:36.436 "rw_ios_per_sec": 0, 00:10:36.436 "rw_mbytes_per_sec": 0, 00:10:36.436 "r_mbytes_per_sec": 0, 00:10:36.436 "w_mbytes_per_sec": 0 00:10:36.436 }, 00:10:36.436 "claimed": true, 00:10:36.436 "claim_type": "exclusive_write", 00:10:36.436 "zoned": false, 00:10:36.436 "supported_io_types": { 
00:10:36.436 "read": true, 00:10:36.436 "write": true, 00:10:36.436 "unmap": true, 00:10:36.436 "flush": true, 00:10:36.436 "reset": true, 00:10:36.436 "nvme_admin": false, 00:10:36.436 "nvme_io": false, 00:10:36.436 "nvme_io_md": false, 00:10:36.436 "write_zeroes": true, 00:10:36.436 "zcopy": true, 00:10:36.436 "get_zone_info": false, 00:10:36.436 "zone_management": false, 00:10:36.436 "zone_append": false, 00:10:36.436 "compare": false, 00:10:36.436 "compare_and_write": false, 00:10:36.436 "abort": true, 00:10:36.436 "seek_hole": false, 00:10:36.436 "seek_data": false, 00:10:36.436 "copy": true, 00:10:36.436 "nvme_iov_md": false 00:10:36.436 }, 00:10:36.436 "memory_domains": [ 00:10:36.436 { 00:10:36.436 "dma_device_id": "system", 00:10:36.436 "dma_device_type": 1 00:10:36.436 }, 00:10:36.436 { 00:10:36.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.436 "dma_device_type": 2 00:10:36.436 } 00:10:36.436 ], 00:10:36.436 "driver_specific": {} 00:10:36.436 } 00:10:36.436 ] 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.436 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.739 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.739 "name": "Existed_Raid", 00:10:36.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.739 "strip_size_kb": 64, 00:10:36.739 "state": "configuring", 00:10:36.739 "raid_level": "concat", 00:10:36.739 "superblock": false, 00:10:36.739 "num_base_bdevs": 4, 00:10:36.739 "num_base_bdevs_discovered": 2, 00:10:36.739 "num_base_bdevs_operational": 4, 00:10:36.739 "base_bdevs_list": [ 00:10:36.739 { 00:10:36.739 "name": "BaseBdev1", 00:10:36.739 "uuid": "6b6120e6-e741-4ad7-985f-e525afa7486c", 00:10:36.739 "is_configured": true, 00:10:36.739 "data_offset": 0, 00:10:36.739 "data_size": 65536 00:10:36.739 }, 00:10:36.739 { 00:10:36.739 "name": "BaseBdev2", 00:10:36.739 "uuid": "7fe2cbd5-3930-4a9d-8505-60977f78192e", 00:10:36.739 
"is_configured": true, 00:10:36.739 "data_offset": 0, 00:10:36.739 "data_size": 65536 00:10:36.739 }, 00:10:36.739 { 00:10:36.739 "name": "BaseBdev3", 00:10:36.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.739 "is_configured": false, 00:10:36.739 "data_offset": 0, 00:10:36.739 "data_size": 0 00:10:36.739 }, 00:10:36.739 { 00:10:36.739 "name": "BaseBdev4", 00:10:36.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.739 "is_configured": false, 00:10:36.739 "data_offset": 0, 00:10:36.739 "data_size": 0 00:10:36.739 } 00:10:36.739 ] 00:10:36.739 }' 00:10:36.739 21:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.739 21:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.998 [2024-11-26 21:17:55.087325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.998 BaseBdev3 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.998 [ 00:10:36.998 { 00:10:36.998 "name": "BaseBdev3", 00:10:36.998 "aliases": [ 00:10:36.998 "92e90838-45a6-4240-9e3b-e4b5bc4dc352" 00:10:36.998 ], 00:10:36.998 "product_name": "Malloc disk", 00:10:36.998 "block_size": 512, 00:10:36.998 "num_blocks": 65536, 00:10:36.998 "uuid": "92e90838-45a6-4240-9e3b-e4b5bc4dc352", 00:10:36.998 "assigned_rate_limits": { 00:10:36.998 "rw_ios_per_sec": 0, 00:10:36.998 "rw_mbytes_per_sec": 0, 00:10:36.998 "r_mbytes_per_sec": 0, 00:10:36.998 "w_mbytes_per_sec": 0 00:10:36.998 }, 00:10:36.998 "claimed": true, 00:10:36.998 "claim_type": "exclusive_write", 00:10:36.998 "zoned": false, 00:10:36.998 "supported_io_types": { 00:10:36.998 "read": true, 00:10:36.998 "write": true, 00:10:36.998 "unmap": true, 00:10:36.998 "flush": true, 00:10:36.998 "reset": true, 00:10:36.998 "nvme_admin": false, 00:10:36.998 "nvme_io": false, 00:10:36.998 "nvme_io_md": false, 00:10:36.998 "write_zeroes": true, 00:10:36.998 "zcopy": true, 00:10:36.998 "get_zone_info": false, 00:10:36.998 "zone_management": false, 00:10:36.998 "zone_append": false, 00:10:36.998 "compare": false, 00:10:36.998 "compare_and_write": false, 
00:10:36.998 "abort": true, 00:10:36.998 "seek_hole": false, 00:10:36.998 "seek_data": false, 00:10:36.998 "copy": true, 00:10:36.998 "nvme_iov_md": false 00:10:36.998 }, 00:10:36.998 "memory_domains": [ 00:10:36.998 { 00:10:36.998 "dma_device_id": "system", 00:10:36.998 "dma_device_type": 1 00:10:36.998 }, 00:10:36.998 { 00:10:36.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.998 "dma_device_type": 2 00:10:36.998 } 00:10:36.998 ], 00:10:36.998 "driver_specific": {} 00:10:36.998 } 00:10:36.998 ] 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.998 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.999 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.999 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:36.999 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.999 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.999 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.999 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.999 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.999 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.256 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.256 "name": "Existed_Raid", 00:10:37.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.256 "strip_size_kb": 64, 00:10:37.256 "state": "configuring", 00:10:37.256 "raid_level": "concat", 00:10:37.256 "superblock": false, 00:10:37.256 "num_base_bdevs": 4, 00:10:37.256 "num_base_bdevs_discovered": 3, 00:10:37.256 "num_base_bdevs_operational": 4, 00:10:37.256 "base_bdevs_list": [ 00:10:37.256 { 00:10:37.256 "name": "BaseBdev1", 00:10:37.256 "uuid": "6b6120e6-e741-4ad7-985f-e525afa7486c", 00:10:37.256 "is_configured": true, 00:10:37.256 "data_offset": 0, 00:10:37.256 "data_size": 65536 00:10:37.256 }, 00:10:37.256 { 00:10:37.256 "name": "BaseBdev2", 00:10:37.256 "uuid": "7fe2cbd5-3930-4a9d-8505-60977f78192e", 00:10:37.256 "is_configured": true, 00:10:37.256 "data_offset": 0, 00:10:37.256 "data_size": 65536 00:10:37.256 }, 00:10:37.256 { 00:10:37.256 "name": "BaseBdev3", 00:10:37.256 "uuid": "92e90838-45a6-4240-9e3b-e4b5bc4dc352", 00:10:37.256 "is_configured": true, 00:10:37.256 "data_offset": 0, 00:10:37.256 "data_size": 65536 00:10:37.256 }, 00:10:37.256 { 00:10:37.256 "name": "BaseBdev4", 00:10:37.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.256 "is_configured": false, 
00:10:37.256 "data_offset": 0, 00:10:37.256 "data_size": 0 00:10:37.256 } 00:10:37.256 ] 00:10:37.256 }' 00:10:37.256 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.256 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.514 [2024-11-26 21:17:55.599786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:37.514 [2024-11-26 21:17:55.599838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:37.514 [2024-11-26 21:17:55.599846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:37.514 [2024-11-26 21:17:55.600166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:37.514 [2024-11-26 21:17:55.600330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:37.514 [2024-11-26 21:17:55.600347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:37.514 [2024-11-26 21:17:55.600600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.514 BaseBdev4 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.514 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.514 [ 00:10:37.514 { 00:10:37.514 "name": "BaseBdev4", 00:10:37.514 "aliases": [ 00:10:37.514 "46a6d95d-3169-4597-9b34-02608c35e0f9" 00:10:37.514 ], 00:10:37.514 "product_name": "Malloc disk", 00:10:37.514 "block_size": 512, 00:10:37.514 "num_blocks": 65536, 00:10:37.514 "uuid": "46a6d95d-3169-4597-9b34-02608c35e0f9", 00:10:37.514 "assigned_rate_limits": { 00:10:37.514 "rw_ios_per_sec": 0, 00:10:37.514 "rw_mbytes_per_sec": 0, 00:10:37.514 "r_mbytes_per_sec": 0, 00:10:37.514 "w_mbytes_per_sec": 0 00:10:37.514 }, 00:10:37.514 "claimed": true, 00:10:37.514 "claim_type": "exclusive_write", 00:10:37.514 "zoned": false, 00:10:37.514 "supported_io_types": { 00:10:37.514 "read": true, 00:10:37.514 "write": true, 00:10:37.514 "unmap": true, 00:10:37.514 "flush": true, 00:10:37.514 "reset": true, 00:10:37.514 
"nvme_admin": false, 00:10:37.514 "nvme_io": false, 00:10:37.514 "nvme_io_md": false, 00:10:37.514 "write_zeroes": true, 00:10:37.514 "zcopy": true, 00:10:37.514 "get_zone_info": false, 00:10:37.514 "zone_management": false, 00:10:37.514 "zone_append": false, 00:10:37.514 "compare": false, 00:10:37.514 "compare_and_write": false, 00:10:37.514 "abort": true, 00:10:37.514 "seek_hole": false, 00:10:37.514 "seek_data": false, 00:10:37.514 "copy": true, 00:10:37.515 "nvme_iov_md": false 00:10:37.515 }, 00:10:37.515 "memory_domains": [ 00:10:37.515 { 00:10:37.515 "dma_device_id": "system", 00:10:37.515 "dma_device_type": 1 00:10:37.515 }, 00:10:37.515 { 00:10:37.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.515 "dma_device_type": 2 00:10:37.515 } 00:10:37.515 ], 00:10:37.515 "driver_specific": {} 00:10:37.515 } 00:10:37.515 ] 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.515 
21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.515 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.773 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.773 "name": "Existed_Raid", 00:10:37.773 "uuid": "7001ad04-ba39-4c5f-8469-95f7925fcbd3", 00:10:37.773 "strip_size_kb": 64, 00:10:37.773 "state": "online", 00:10:37.773 "raid_level": "concat", 00:10:37.773 "superblock": false, 00:10:37.773 "num_base_bdevs": 4, 00:10:37.773 "num_base_bdevs_discovered": 4, 00:10:37.773 "num_base_bdevs_operational": 4, 00:10:37.773 "base_bdevs_list": [ 00:10:37.773 { 00:10:37.773 "name": "BaseBdev1", 00:10:37.773 "uuid": "6b6120e6-e741-4ad7-985f-e525afa7486c", 00:10:37.773 "is_configured": true, 00:10:37.773 "data_offset": 0, 00:10:37.773 "data_size": 65536 00:10:37.773 }, 00:10:37.773 { 00:10:37.773 "name": "BaseBdev2", 00:10:37.773 "uuid": "7fe2cbd5-3930-4a9d-8505-60977f78192e", 00:10:37.773 "is_configured": true, 00:10:37.773 "data_offset": 0, 00:10:37.773 "data_size": 65536 00:10:37.773 }, 00:10:37.773 { 00:10:37.773 "name": "BaseBdev3", 
00:10:37.773 "uuid": "92e90838-45a6-4240-9e3b-e4b5bc4dc352", 00:10:37.773 "is_configured": true, 00:10:37.773 "data_offset": 0, 00:10:37.773 "data_size": 65536 00:10:37.773 }, 00:10:37.773 { 00:10:37.773 "name": "BaseBdev4", 00:10:37.773 "uuid": "46a6d95d-3169-4597-9b34-02608c35e0f9", 00:10:37.773 "is_configured": true, 00:10:37.773 "data_offset": 0, 00:10:37.774 "data_size": 65536 00:10:37.774 } 00:10:37.774 ] 00:10:37.774 }' 00:10:37.774 21:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.774 21:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.032 [2024-11-26 21:17:56.099325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.032 
21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.032 "name": "Existed_Raid", 00:10:38.032 "aliases": [ 00:10:38.032 "7001ad04-ba39-4c5f-8469-95f7925fcbd3" 00:10:38.032 ], 00:10:38.032 "product_name": "Raid Volume", 00:10:38.032 "block_size": 512, 00:10:38.032 "num_blocks": 262144, 00:10:38.032 "uuid": "7001ad04-ba39-4c5f-8469-95f7925fcbd3", 00:10:38.032 "assigned_rate_limits": { 00:10:38.032 "rw_ios_per_sec": 0, 00:10:38.032 "rw_mbytes_per_sec": 0, 00:10:38.032 "r_mbytes_per_sec": 0, 00:10:38.032 "w_mbytes_per_sec": 0 00:10:38.032 }, 00:10:38.032 "claimed": false, 00:10:38.032 "zoned": false, 00:10:38.032 "supported_io_types": { 00:10:38.032 "read": true, 00:10:38.032 "write": true, 00:10:38.032 "unmap": true, 00:10:38.032 "flush": true, 00:10:38.032 "reset": true, 00:10:38.032 "nvme_admin": false, 00:10:38.032 "nvme_io": false, 00:10:38.032 "nvme_io_md": false, 00:10:38.032 "write_zeroes": true, 00:10:38.032 "zcopy": false, 00:10:38.032 "get_zone_info": false, 00:10:38.032 "zone_management": false, 00:10:38.032 "zone_append": false, 00:10:38.032 "compare": false, 00:10:38.032 "compare_and_write": false, 00:10:38.032 "abort": false, 00:10:38.032 "seek_hole": false, 00:10:38.032 "seek_data": false, 00:10:38.032 "copy": false, 00:10:38.032 "nvme_iov_md": false 00:10:38.032 }, 00:10:38.032 "memory_domains": [ 00:10:38.032 { 00:10:38.032 "dma_device_id": "system", 00:10:38.032 "dma_device_type": 1 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.032 "dma_device_type": 2 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "dma_device_id": "system", 00:10:38.032 "dma_device_type": 1 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.032 "dma_device_type": 2 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "dma_device_id": "system", 00:10:38.032 "dma_device_type": 1 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:38.032 "dma_device_type": 2 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "dma_device_id": "system", 00:10:38.032 "dma_device_type": 1 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.032 "dma_device_type": 2 00:10:38.032 } 00:10:38.032 ], 00:10:38.032 "driver_specific": { 00:10:38.032 "raid": { 00:10:38.032 "uuid": "7001ad04-ba39-4c5f-8469-95f7925fcbd3", 00:10:38.032 "strip_size_kb": 64, 00:10:38.032 "state": "online", 00:10:38.032 "raid_level": "concat", 00:10:38.032 "superblock": false, 00:10:38.032 "num_base_bdevs": 4, 00:10:38.032 "num_base_bdevs_discovered": 4, 00:10:38.032 "num_base_bdevs_operational": 4, 00:10:38.032 "base_bdevs_list": [ 00:10:38.032 { 00:10:38.032 "name": "BaseBdev1", 00:10:38.032 "uuid": "6b6120e6-e741-4ad7-985f-e525afa7486c", 00:10:38.032 "is_configured": true, 00:10:38.032 "data_offset": 0, 00:10:38.032 "data_size": 65536 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "name": "BaseBdev2", 00:10:38.032 "uuid": "7fe2cbd5-3930-4a9d-8505-60977f78192e", 00:10:38.032 "is_configured": true, 00:10:38.032 "data_offset": 0, 00:10:38.032 "data_size": 65536 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "name": "BaseBdev3", 00:10:38.032 "uuid": "92e90838-45a6-4240-9e3b-e4b5bc4dc352", 00:10:38.032 "is_configured": true, 00:10:38.032 "data_offset": 0, 00:10:38.032 "data_size": 65536 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "name": "BaseBdev4", 00:10:38.032 "uuid": "46a6d95d-3169-4597-9b34-02608c35e0f9", 00:10:38.032 "is_configured": true, 00:10:38.032 "data_offset": 0, 00:10:38.032 "data_size": 65536 00:10:38.032 } 00:10:38.032 ] 00:10:38.032 } 00:10:38.032 } 00:10:38.032 }' 00:10:38.032 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:38.291 BaseBdev2 
00:10:38.291 BaseBdev3 00:10:38.291 BaseBdev4' 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.291 21:17:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.291 21:17:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.291 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.291 [2024-11-26 21:17:56.434445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.291 [2024-11-26 21:17:56.434521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.291 [2024-11-26 21:17:56.434592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.550 "name": "Existed_Raid", 00:10:38.550 "uuid": "7001ad04-ba39-4c5f-8469-95f7925fcbd3", 00:10:38.550 "strip_size_kb": 64, 00:10:38.550 "state": "offline", 00:10:38.550 "raid_level": "concat", 00:10:38.550 "superblock": false, 00:10:38.550 "num_base_bdevs": 4, 00:10:38.550 "num_base_bdevs_discovered": 3, 00:10:38.550 "num_base_bdevs_operational": 3, 00:10:38.550 "base_bdevs_list": [ 00:10:38.550 { 00:10:38.550 "name": null, 00:10:38.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.550 "is_configured": false, 00:10:38.550 "data_offset": 0, 00:10:38.550 "data_size": 65536 00:10:38.550 }, 00:10:38.550 { 00:10:38.550 "name": "BaseBdev2", 00:10:38.550 "uuid": "7fe2cbd5-3930-4a9d-8505-60977f78192e", 00:10:38.550 "is_configured": 
true, 00:10:38.550 "data_offset": 0, 00:10:38.550 "data_size": 65536 00:10:38.550 }, 00:10:38.550 { 00:10:38.550 "name": "BaseBdev3", 00:10:38.550 "uuid": "92e90838-45a6-4240-9e3b-e4b5bc4dc352", 00:10:38.550 "is_configured": true, 00:10:38.550 "data_offset": 0, 00:10:38.550 "data_size": 65536 00:10:38.550 }, 00:10:38.550 { 00:10:38.550 "name": "BaseBdev4", 00:10:38.550 "uuid": "46a6d95d-3169-4597-9b34-02608c35e0f9", 00:10:38.550 "is_configured": true, 00:10:38.550 "data_offset": 0, 00:10:38.550 "data_size": 65536 00:10:38.550 } 00:10:38.550 ] 00:10:38.550 }' 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.550 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:38.820 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.820 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.820 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.820 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.820 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.108 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.108 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.108 21:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:39.108 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:39.108 21:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.108 [2024-11-26 21:17:56.986095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.108 [2024-11-26 21:17:57.139562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.108 21:17:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.108 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.368 [2024-11-26 21:17:57.284487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:39.368 [2024-11-26 21:17:57.284600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.368 BaseBdev2 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.368 [ 00:10:39.368 { 00:10:39.368 "name": "BaseBdev2", 00:10:39.368 "aliases": [ 00:10:39.368 "08ac591e-1819-47e5-9c62-e0d05223b464" 00:10:39.368 ], 00:10:39.368 "product_name": "Malloc disk", 00:10:39.368 "block_size": 512, 00:10:39.368 "num_blocks": 65536, 00:10:39.368 "uuid": "08ac591e-1819-47e5-9c62-e0d05223b464", 00:10:39.368 "assigned_rate_limits": { 00:10:39.368 "rw_ios_per_sec": 0, 00:10:39.368 "rw_mbytes_per_sec": 0, 00:10:39.368 "r_mbytes_per_sec": 0, 00:10:39.368 "w_mbytes_per_sec": 0 00:10:39.368 }, 00:10:39.368 "claimed": false, 00:10:39.368 "zoned": false, 00:10:39.368 "supported_io_types": { 00:10:39.368 "read": true, 00:10:39.368 "write": true, 00:10:39.368 "unmap": true, 00:10:39.368 "flush": true, 00:10:39.368 "reset": true, 00:10:39.368 "nvme_admin": false, 00:10:39.368 "nvme_io": false, 00:10:39.368 "nvme_io_md": false, 00:10:39.368 "write_zeroes": true, 00:10:39.368 "zcopy": true, 00:10:39.368 "get_zone_info": false, 00:10:39.368 "zone_management": false, 00:10:39.368 "zone_append": false, 00:10:39.368 "compare": false, 00:10:39.368 "compare_and_write": false, 00:10:39.368 "abort": true, 00:10:39.368 "seek_hole": false, 00:10:39.368 
"seek_data": false, 00:10:39.368 "copy": true, 00:10:39.368 "nvme_iov_md": false 00:10:39.368 }, 00:10:39.368 "memory_domains": [ 00:10:39.368 { 00:10:39.368 "dma_device_id": "system", 00:10:39.368 "dma_device_type": 1 00:10:39.368 }, 00:10:39.368 { 00:10:39.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.368 "dma_device_type": 2 00:10:39.368 } 00:10:39.368 ], 00:10:39.368 "driver_specific": {} 00:10:39.368 } 00:10:39.368 ] 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.368 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.629 BaseBdev3 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.629 [ 00:10:39.629 { 00:10:39.629 "name": "BaseBdev3", 00:10:39.629 "aliases": [ 00:10:39.629 "dab7fc67-f985-4bb6-bfa5-2291d7b96a2f" 00:10:39.629 ], 00:10:39.629 "product_name": "Malloc disk", 00:10:39.629 "block_size": 512, 00:10:39.629 "num_blocks": 65536, 00:10:39.629 "uuid": "dab7fc67-f985-4bb6-bfa5-2291d7b96a2f", 00:10:39.629 "assigned_rate_limits": { 00:10:39.629 "rw_ios_per_sec": 0, 00:10:39.629 "rw_mbytes_per_sec": 0, 00:10:39.629 "r_mbytes_per_sec": 0, 00:10:39.629 "w_mbytes_per_sec": 0 00:10:39.629 }, 00:10:39.629 "claimed": false, 00:10:39.629 "zoned": false, 00:10:39.629 "supported_io_types": { 00:10:39.629 "read": true, 00:10:39.629 "write": true, 00:10:39.629 "unmap": true, 00:10:39.629 "flush": true, 00:10:39.629 "reset": true, 00:10:39.629 "nvme_admin": false, 00:10:39.629 "nvme_io": false, 00:10:39.629 "nvme_io_md": false, 00:10:39.629 "write_zeroes": true, 00:10:39.629 "zcopy": true, 00:10:39.629 "get_zone_info": false, 00:10:39.629 "zone_management": false, 00:10:39.629 "zone_append": false, 00:10:39.629 "compare": false, 00:10:39.629 "compare_and_write": false, 00:10:39.629 "abort": true, 00:10:39.629 "seek_hole": false, 00:10:39.629 "seek_data": false, 
00:10:39.629 "copy": true, 00:10:39.629 "nvme_iov_md": false 00:10:39.629 }, 00:10:39.629 "memory_domains": [ 00:10:39.629 { 00:10:39.629 "dma_device_id": "system", 00:10:39.629 "dma_device_type": 1 00:10:39.629 }, 00:10:39.629 { 00:10:39.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.629 "dma_device_type": 2 00:10:39.629 } 00:10:39.629 ], 00:10:39.629 "driver_specific": {} 00:10:39.629 } 00:10:39.629 ] 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.629 BaseBdev4 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.629 
21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.629 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.629 [ 00:10:39.629 { 00:10:39.629 "name": "BaseBdev4", 00:10:39.629 "aliases": [ 00:10:39.629 "70d0def4-912c-43ea-ac9f-3d09aa65db1d" 00:10:39.629 ], 00:10:39.629 "product_name": "Malloc disk", 00:10:39.629 "block_size": 512, 00:10:39.629 "num_blocks": 65536, 00:10:39.629 "uuid": "70d0def4-912c-43ea-ac9f-3d09aa65db1d", 00:10:39.629 "assigned_rate_limits": { 00:10:39.629 "rw_ios_per_sec": 0, 00:10:39.629 "rw_mbytes_per_sec": 0, 00:10:39.629 "r_mbytes_per_sec": 0, 00:10:39.629 "w_mbytes_per_sec": 0 00:10:39.629 }, 00:10:39.629 "claimed": false, 00:10:39.629 "zoned": false, 00:10:39.629 "supported_io_types": { 00:10:39.629 "read": true, 00:10:39.629 "write": true, 00:10:39.629 "unmap": true, 00:10:39.629 "flush": true, 00:10:39.629 "reset": true, 00:10:39.629 "nvme_admin": false, 00:10:39.629 "nvme_io": false, 00:10:39.629 "nvme_io_md": false, 00:10:39.629 "write_zeroes": true, 00:10:39.629 "zcopy": true, 00:10:39.629 "get_zone_info": false, 00:10:39.629 "zone_management": false, 00:10:39.629 "zone_append": false, 00:10:39.629 "compare": false, 00:10:39.629 "compare_and_write": false, 00:10:39.629 "abort": true, 00:10:39.629 "seek_hole": false, 00:10:39.629 "seek_data": false, 00:10:39.629 
"copy": true, 00:10:39.629 "nvme_iov_md": false 00:10:39.629 }, 00:10:39.629 "memory_domains": [ 00:10:39.629 { 00:10:39.629 "dma_device_id": "system", 00:10:39.629 "dma_device_type": 1 00:10:39.629 }, 00:10:39.630 { 00:10:39.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.630 "dma_device_type": 2 00:10:39.630 } 00:10:39.630 ], 00:10:39.630 "driver_specific": {} 00:10:39.630 } 00:10:39.630 ] 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.630 [2024-11-26 21:17:57.667719] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.630 [2024-11-26 21:17:57.667828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.630 [2024-11-26 21:17:57.667879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.630 [2024-11-26 21:17:57.669672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.630 [2024-11-26 21:17:57.669761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.630 21:17:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.630 "name": "Existed_Raid", 00:10:39.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.630 "strip_size_kb": 64, 00:10:39.630 "state": "configuring", 00:10:39.630 
"raid_level": "concat", 00:10:39.630 "superblock": false, 00:10:39.630 "num_base_bdevs": 4, 00:10:39.630 "num_base_bdevs_discovered": 3, 00:10:39.630 "num_base_bdevs_operational": 4, 00:10:39.630 "base_bdevs_list": [ 00:10:39.630 { 00:10:39.630 "name": "BaseBdev1", 00:10:39.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.630 "is_configured": false, 00:10:39.630 "data_offset": 0, 00:10:39.630 "data_size": 0 00:10:39.630 }, 00:10:39.630 { 00:10:39.630 "name": "BaseBdev2", 00:10:39.630 "uuid": "08ac591e-1819-47e5-9c62-e0d05223b464", 00:10:39.630 "is_configured": true, 00:10:39.630 "data_offset": 0, 00:10:39.630 "data_size": 65536 00:10:39.630 }, 00:10:39.630 { 00:10:39.630 "name": "BaseBdev3", 00:10:39.630 "uuid": "dab7fc67-f985-4bb6-bfa5-2291d7b96a2f", 00:10:39.630 "is_configured": true, 00:10:39.630 "data_offset": 0, 00:10:39.630 "data_size": 65536 00:10:39.630 }, 00:10:39.630 { 00:10:39.630 "name": "BaseBdev4", 00:10:39.630 "uuid": "70d0def4-912c-43ea-ac9f-3d09aa65db1d", 00:10:39.630 "is_configured": true, 00:10:39.630 "data_offset": 0, 00:10:39.630 "data_size": 65536 00:10:39.630 } 00:10:39.630 ] 00:10:39.630 }' 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.630 21:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.199 [2024-11-26 21:17:58.134917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.199 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.199 "name": "Existed_Raid", 00:10:40.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.199 "strip_size_kb": 64, 00:10:40.199 "state": "configuring", 00:10:40.199 "raid_level": "concat", 00:10:40.199 "superblock": false, 
00:10:40.199 "num_base_bdevs": 4, 00:10:40.199 "num_base_bdevs_discovered": 2, 00:10:40.199 "num_base_bdevs_operational": 4, 00:10:40.199 "base_bdevs_list": [ 00:10:40.199 { 00:10:40.199 "name": "BaseBdev1", 00:10:40.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.199 "is_configured": false, 00:10:40.199 "data_offset": 0, 00:10:40.199 "data_size": 0 00:10:40.199 }, 00:10:40.199 { 00:10:40.199 "name": null, 00:10:40.199 "uuid": "08ac591e-1819-47e5-9c62-e0d05223b464", 00:10:40.199 "is_configured": false, 00:10:40.199 "data_offset": 0, 00:10:40.200 "data_size": 65536 00:10:40.200 }, 00:10:40.200 { 00:10:40.200 "name": "BaseBdev3", 00:10:40.200 "uuid": "dab7fc67-f985-4bb6-bfa5-2291d7b96a2f", 00:10:40.200 "is_configured": true, 00:10:40.200 "data_offset": 0, 00:10:40.200 "data_size": 65536 00:10:40.200 }, 00:10:40.200 { 00:10:40.200 "name": "BaseBdev4", 00:10:40.200 "uuid": "70d0def4-912c-43ea-ac9f-3d09aa65db1d", 00:10:40.200 "is_configured": true, 00:10:40.200 "data_offset": 0, 00:10:40.200 "data_size": 65536 00:10:40.200 } 00:10:40.200 ] 00:10:40.200 }' 00:10:40.200 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.200 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:40.459 21:17:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.459 [2024-11-26 21:17:58.601979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.459 BaseBdev1 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.459 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.719 [ 00:10:40.719 { 00:10:40.719 "name": "BaseBdev1", 00:10:40.719 "aliases": [ 00:10:40.719 "84755dac-1c1d-4df2-8021-f2fec98f70d2" 00:10:40.719 ], 00:10:40.719 "product_name": "Malloc disk", 00:10:40.719 "block_size": 512, 00:10:40.719 "num_blocks": 65536, 00:10:40.719 "uuid": "84755dac-1c1d-4df2-8021-f2fec98f70d2", 00:10:40.719 "assigned_rate_limits": { 00:10:40.719 "rw_ios_per_sec": 0, 00:10:40.719 "rw_mbytes_per_sec": 0, 00:10:40.719 "r_mbytes_per_sec": 0, 00:10:40.719 "w_mbytes_per_sec": 0 00:10:40.719 }, 00:10:40.719 "claimed": true, 00:10:40.719 "claim_type": "exclusive_write", 00:10:40.719 "zoned": false, 00:10:40.719 "supported_io_types": { 00:10:40.719 "read": true, 00:10:40.719 "write": true, 00:10:40.719 "unmap": true, 00:10:40.719 "flush": true, 00:10:40.719 "reset": true, 00:10:40.719 "nvme_admin": false, 00:10:40.719 "nvme_io": false, 00:10:40.719 "nvme_io_md": false, 00:10:40.719 "write_zeroes": true, 00:10:40.719 "zcopy": true, 00:10:40.719 "get_zone_info": false, 00:10:40.719 "zone_management": false, 00:10:40.719 "zone_append": false, 00:10:40.719 "compare": false, 00:10:40.719 "compare_and_write": false, 00:10:40.719 "abort": true, 00:10:40.719 "seek_hole": false, 00:10:40.719 "seek_data": false, 00:10:40.719 "copy": true, 00:10:40.719 "nvme_iov_md": false 00:10:40.719 }, 00:10:40.719 "memory_domains": [ 00:10:40.719 { 00:10:40.719 "dma_device_id": "system", 00:10:40.719 "dma_device_type": 1 00:10:40.719 }, 00:10:40.719 { 00:10:40.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.719 "dma_device_type": 2 00:10:40.719 } 00:10:40.719 ], 00:10:40.719 "driver_specific": {} 00:10:40.719 } 00:10:40.719 ] 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.719 "name": "Existed_Raid", 00:10:40.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.719 "strip_size_kb": 64, 00:10:40.719 "state": "configuring", 00:10:40.719 "raid_level": "concat", 00:10:40.719 "superblock": false, 
00:10:40.719 "num_base_bdevs": 4, 00:10:40.719 "num_base_bdevs_discovered": 3, 00:10:40.719 "num_base_bdevs_operational": 4, 00:10:40.719 "base_bdevs_list": [ 00:10:40.719 { 00:10:40.719 "name": "BaseBdev1", 00:10:40.719 "uuid": "84755dac-1c1d-4df2-8021-f2fec98f70d2", 00:10:40.719 "is_configured": true, 00:10:40.719 "data_offset": 0, 00:10:40.719 "data_size": 65536 00:10:40.719 }, 00:10:40.719 { 00:10:40.719 "name": null, 00:10:40.719 "uuid": "08ac591e-1819-47e5-9c62-e0d05223b464", 00:10:40.719 "is_configured": false, 00:10:40.719 "data_offset": 0, 00:10:40.719 "data_size": 65536 00:10:40.719 }, 00:10:40.719 { 00:10:40.719 "name": "BaseBdev3", 00:10:40.719 "uuid": "dab7fc67-f985-4bb6-bfa5-2291d7b96a2f", 00:10:40.719 "is_configured": true, 00:10:40.719 "data_offset": 0, 00:10:40.719 "data_size": 65536 00:10:40.719 }, 00:10:40.719 { 00:10:40.719 "name": "BaseBdev4", 00:10:40.719 "uuid": "70d0def4-912c-43ea-ac9f-3d09aa65db1d", 00:10:40.719 "is_configured": true, 00:10:40.719 "data_offset": 0, 00:10:40.719 "data_size": 65536 00:10:40.719 } 00:10:40.719 ] 00:10:40.719 }' 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.719 21:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:40.979 21:17:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.979 [2024-11-26 21:17:59.073226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.979 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.248 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.248 "name": "Existed_Raid", 00:10:41.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.248 "strip_size_kb": 64, 00:10:41.248 "state": "configuring", 00:10:41.248 "raid_level": "concat", 00:10:41.248 "superblock": false, 00:10:41.248 "num_base_bdevs": 4, 00:10:41.248 "num_base_bdevs_discovered": 2, 00:10:41.248 "num_base_bdevs_operational": 4, 00:10:41.248 "base_bdevs_list": [ 00:10:41.248 { 00:10:41.248 "name": "BaseBdev1", 00:10:41.248 "uuid": "84755dac-1c1d-4df2-8021-f2fec98f70d2", 00:10:41.248 "is_configured": true, 00:10:41.248 "data_offset": 0, 00:10:41.248 "data_size": 65536 00:10:41.248 }, 00:10:41.248 { 00:10:41.248 "name": null, 00:10:41.248 "uuid": "08ac591e-1819-47e5-9c62-e0d05223b464", 00:10:41.248 "is_configured": false, 00:10:41.248 "data_offset": 0, 00:10:41.248 "data_size": 65536 00:10:41.248 }, 00:10:41.248 { 00:10:41.248 "name": null, 00:10:41.248 "uuid": "dab7fc67-f985-4bb6-bfa5-2291d7b96a2f", 00:10:41.248 "is_configured": false, 00:10:41.248 "data_offset": 0, 00:10:41.248 "data_size": 65536 00:10:41.248 }, 00:10:41.248 { 00:10:41.248 "name": "BaseBdev4", 00:10:41.248 "uuid": "70d0def4-912c-43ea-ac9f-3d09aa65db1d", 00:10:41.248 "is_configured": true, 00:10:41.248 "data_offset": 0, 00:10:41.248 "data_size": 65536 00:10:41.248 } 00:10:41.248 ] 00:10:41.248 }' 00:10:41.248 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.248 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.508 [2024-11-26 21:17:59.588423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.508 "name": "Existed_Raid", 00:10:41.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.508 "strip_size_kb": 64, 00:10:41.508 "state": "configuring", 00:10:41.508 "raid_level": "concat", 00:10:41.508 "superblock": false, 00:10:41.508 "num_base_bdevs": 4, 00:10:41.508 "num_base_bdevs_discovered": 3, 00:10:41.508 "num_base_bdevs_operational": 4, 00:10:41.508 "base_bdevs_list": [ 00:10:41.508 { 00:10:41.508 "name": "BaseBdev1", 00:10:41.508 "uuid": "84755dac-1c1d-4df2-8021-f2fec98f70d2", 00:10:41.508 "is_configured": true, 00:10:41.508 "data_offset": 0, 00:10:41.508 "data_size": 65536 00:10:41.508 }, 00:10:41.508 { 00:10:41.508 "name": null, 00:10:41.508 "uuid": "08ac591e-1819-47e5-9c62-e0d05223b464", 00:10:41.508 "is_configured": false, 00:10:41.508 "data_offset": 0, 00:10:41.508 "data_size": 65536 00:10:41.508 }, 00:10:41.508 { 00:10:41.508 "name": "BaseBdev3", 00:10:41.508 "uuid": "dab7fc67-f985-4bb6-bfa5-2291d7b96a2f", 00:10:41.508 
"is_configured": true, 00:10:41.508 "data_offset": 0, 00:10:41.508 "data_size": 65536 00:10:41.508 }, 00:10:41.508 { 00:10:41.508 "name": "BaseBdev4", 00:10:41.508 "uuid": "70d0def4-912c-43ea-ac9f-3d09aa65db1d", 00:10:41.508 "is_configured": true, 00:10:41.508 "data_offset": 0, 00:10:41.508 "data_size": 65536 00:10:41.508 } 00:10:41.508 ] 00:10:41.508 }' 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.508 21:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.078 [2024-11-26 21:18:00.095627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.078 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.079 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.079 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.079 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.079 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.339 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.339 "name": "Existed_Raid", 00:10:42.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.339 "strip_size_kb": 64, 00:10:42.339 "state": "configuring", 00:10:42.339 "raid_level": "concat", 00:10:42.339 "superblock": false, 00:10:42.339 "num_base_bdevs": 4, 00:10:42.339 "num_base_bdevs_discovered": 2, 00:10:42.339 "num_base_bdevs_operational": 4, 
00:10:42.339 "base_bdevs_list": [ 00:10:42.339 { 00:10:42.339 "name": null, 00:10:42.339 "uuid": "84755dac-1c1d-4df2-8021-f2fec98f70d2", 00:10:42.339 "is_configured": false, 00:10:42.339 "data_offset": 0, 00:10:42.339 "data_size": 65536 00:10:42.339 }, 00:10:42.339 { 00:10:42.339 "name": null, 00:10:42.339 "uuid": "08ac591e-1819-47e5-9c62-e0d05223b464", 00:10:42.339 "is_configured": false, 00:10:42.339 "data_offset": 0, 00:10:42.339 "data_size": 65536 00:10:42.339 }, 00:10:42.339 { 00:10:42.339 "name": "BaseBdev3", 00:10:42.339 "uuid": "dab7fc67-f985-4bb6-bfa5-2291d7b96a2f", 00:10:42.339 "is_configured": true, 00:10:42.339 "data_offset": 0, 00:10:42.339 "data_size": 65536 00:10:42.339 }, 00:10:42.339 { 00:10:42.339 "name": "BaseBdev4", 00:10:42.339 "uuid": "70d0def4-912c-43ea-ac9f-3d09aa65db1d", 00:10:42.339 "is_configured": true, 00:10:42.339 "data_offset": 0, 00:10:42.339 "data_size": 65536 00:10:42.339 } 00:10:42.339 ] 00:10:42.339 }' 00:10:42.339 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.339 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:42.599 21:18:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.599 [2024-11-26 21:18:00.597734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.599 "name": "Existed_Raid", 00:10:42.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.599 "strip_size_kb": 64, 00:10:42.599 "state": "configuring", 00:10:42.599 "raid_level": "concat", 00:10:42.599 "superblock": false, 00:10:42.599 "num_base_bdevs": 4, 00:10:42.599 "num_base_bdevs_discovered": 3, 00:10:42.599 "num_base_bdevs_operational": 4, 00:10:42.599 "base_bdevs_list": [ 00:10:42.599 { 00:10:42.599 "name": null, 00:10:42.599 "uuid": "84755dac-1c1d-4df2-8021-f2fec98f70d2", 00:10:42.599 "is_configured": false, 00:10:42.599 "data_offset": 0, 00:10:42.599 "data_size": 65536 00:10:42.599 }, 00:10:42.599 { 00:10:42.599 "name": "BaseBdev2", 00:10:42.599 "uuid": "08ac591e-1819-47e5-9c62-e0d05223b464", 00:10:42.599 "is_configured": true, 00:10:42.599 "data_offset": 0, 00:10:42.599 "data_size": 65536 00:10:42.599 }, 00:10:42.599 { 00:10:42.599 "name": "BaseBdev3", 00:10:42.599 "uuid": "dab7fc67-f985-4bb6-bfa5-2291d7b96a2f", 00:10:42.599 "is_configured": true, 00:10:42.599 "data_offset": 0, 00:10:42.599 "data_size": 65536 00:10:42.599 }, 00:10:42.599 { 00:10:42.599 "name": "BaseBdev4", 00:10:42.599 "uuid": "70d0def4-912c-43ea-ac9f-3d09aa65db1d", 00:10:42.599 "is_configured": true, 00:10:42.599 "data_offset": 0, 00:10:42.599 "data_size": 65536 00:10:42.599 } 00:10:42.599 ] 00:10:42.599 }' 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.599 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.859 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.859 21:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:42.859 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.859 21:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.859 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.119 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:43.119 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.119 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.119 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:43.119 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 84755dac-1c1d-4df2-8021-f2fec98f70d2 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.120 [2024-11-26 21:18:01.121456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:43.120 [2024-11-26 21:18:01.121601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:43.120 [2024-11-26 21:18:01.121613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:43.120 [2024-11-26 21:18:01.121879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:43.120 [2024-11-26 21:18:01.122047] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:43.120 [2024-11-26 21:18:01.122060] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:43.120 [2024-11-26 21:18:01.122329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.120 NewBaseBdev 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.120 [ 00:10:43.120 { 
00:10:43.120 "name": "NewBaseBdev", 00:10:43.120 "aliases": [ 00:10:43.120 "84755dac-1c1d-4df2-8021-f2fec98f70d2" 00:10:43.120 ], 00:10:43.120 "product_name": "Malloc disk", 00:10:43.120 "block_size": 512, 00:10:43.120 "num_blocks": 65536, 00:10:43.120 "uuid": "84755dac-1c1d-4df2-8021-f2fec98f70d2", 00:10:43.120 "assigned_rate_limits": { 00:10:43.120 "rw_ios_per_sec": 0, 00:10:43.120 "rw_mbytes_per_sec": 0, 00:10:43.120 "r_mbytes_per_sec": 0, 00:10:43.120 "w_mbytes_per_sec": 0 00:10:43.120 }, 00:10:43.120 "claimed": true, 00:10:43.120 "claim_type": "exclusive_write", 00:10:43.120 "zoned": false, 00:10:43.120 "supported_io_types": { 00:10:43.120 "read": true, 00:10:43.120 "write": true, 00:10:43.120 "unmap": true, 00:10:43.120 "flush": true, 00:10:43.120 "reset": true, 00:10:43.120 "nvme_admin": false, 00:10:43.120 "nvme_io": false, 00:10:43.120 "nvme_io_md": false, 00:10:43.120 "write_zeroes": true, 00:10:43.120 "zcopy": true, 00:10:43.120 "get_zone_info": false, 00:10:43.120 "zone_management": false, 00:10:43.120 "zone_append": false, 00:10:43.120 "compare": false, 00:10:43.120 "compare_and_write": false, 00:10:43.120 "abort": true, 00:10:43.120 "seek_hole": false, 00:10:43.120 "seek_data": false, 00:10:43.120 "copy": true, 00:10:43.120 "nvme_iov_md": false 00:10:43.120 }, 00:10:43.120 "memory_domains": [ 00:10:43.120 { 00:10:43.120 "dma_device_id": "system", 00:10:43.120 "dma_device_type": 1 00:10:43.120 }, 00:10:43.120 { 00:10:43.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.120 "dma_device_type": 2 00:10:43.120 } 00:10:43.120 ], 00:10:43.120 "driver_specific": {} 00:10:43.120 } 00:10:43.120 ] 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:43.120 
21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.120 "name": "Existed_Raid", 00:10:43.120 "uuid": "d444c59b-999d-4c66-99ad-c719e859773d", 00:10:43.120 "strip_size_kb": 64, 00:10:43.120 "state": "online", 00:10:43.120 "raid_level": "concat", 00:10:43.120 "superblock": false, 00:10:43.120 "num_base_bdevs": 4, 00:10:43.120 "num_base_bdevs_discovered": 4, 00:10:43.120 
"num_base_bdevs_operational": 4, 00:10:43.120 "base_bdevs_list": [ 00:10:43.120 { 00:10:43.120 "name": "NewBaseBdev", 00:10:43.120 "uuid": "84755dac-1c1d-4df2-8021-f2fec98f70d2", 00:10:43.120 "is_configured": true, 00:10:43.120 "data_offset": 0, 00:10:43.120 "data_size": 65536 00:10:43.120 }, 00:10:43.120 { 00:10:43.120 "name": "BaseBdev2", 00:10:43.120 "uuid": "08ac591e-1819-47e5-9c62-e0d05223b464", 00:10:43.120 "is_configured": true, 00:10:43.120 "data_offset": 0, 00:10:43.120 "data_size": 65536 00:10:43.120 }, 00:10:43.120 { 00:10:43.120 "name": "BaseBdev3", 00:10:43.120 "uuid": "dab7fc67-f985-4bb6-bfa5-2291d7b96a2f", 00:10:43.120 "is_configured": true, 00:10:43.120 "data_offset": 0, 00:10:43.120 "data_size": 65536 00:10:43.120 }, 00:10:43.120 { 00:10:43.120 "name": "BaseBdev4", 00:10:43.120 "uuid": "70d0def4-912c-43ea-ac9f-3d09aa65db1d", 00:10:43.120 "is_configured": true, 00:10:43.120 "data_offset": 0, 00:10:43.120 "data_size": 65536 00:10:43.120 } 00:10:43.120 ] 00:10:43.120 }' 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.120 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.690 [2024-11-26 21:18:01.557165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.690 "name": "Existed_Raid", 00:10:43.690 "aliases": [ 00:10:43.690 "d444c59b-999d-4c66-99ad-c719e859773d" 00:10:43.690 ], 00:10:43.690 "product_name": "Raid Volume", 00:10:43.690 "block_size": 512, 00:10:43.690 "num_blocks": 262144, 00:10:43.690 "uuid": "d444c59b-999d-4c66-99ad-c719e859773d", 00:10:43.690 "assigned_rate_limits": { 00:10:43.690 "rw_ios_per_sec": 0, 00:10:43.690 "rw_mbytes_per_sec": 0, 00:10:43.690 "r_mbytes_per_sec": 0, 00:10:43.690 "w_mbytes_per_sec": 0 00:10:43.690 }, 00:10:43.690 "claimed": false, 00:10:43.690 "zoned": false, 00:10:43.690 "supported_io_types": { 00:10:43.690 "read": true, 00:10:43.690 "write": true, 00:10:43.690 "unmap": true, 00:10:43.690 "flush": true, 00:10:43.690 "reset": true, 00:10:43.690 "nvme_admin": false, 00:10:43.690 "nvme_io": false, 00:10:43.690 "nvme_io_md": false, 00:10:43.690 "write_zeroes": true, 00:10:43.690 "zcopy": false, 00:10:43.690 "get_zone_info": false, 00:10:43.690 "zone_management": false, 00:10:43.690 "zone_append": false, 00:10:43.690 "compare": false, 00:10:43.690 "compare_and_write": false, 00:10:43.690 "abort": false, 00:10:43.690 "seek_hole": false, 00:10:43.690 "seek_data": false, 00:10:43.690 "copy": false, 00:10:43.690 "nvme_iov_md": false 00:10:43.690 }, 00:10:43.690 "memory_domains": [ 00:10:43.690 { 00:10:43.690 "dma_device_id": "system", 
00:10:43.690 "dma_device_type": 1 00:10:43.690 }, 00:10:43.690 { 00:10:43.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.690 "dma_device_type": 2 00:10:43.690 }, 00:10:43.690 { 00:10:43.690 "dma_device_id": "system", 00:10:43.690 "dma_device_type": 1 00:10:43.690 }, 00:10:43.690 { 00:10:43.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.690 "dma_device_type": 2 00:10:43.690 }, 00:10:43.690 { 00:10:43.690 "dma_device_id": "system", 00:10:43.690 "dma_device_type": 1 00:10:43.690 }, 00:10:43.690 { 00:10:43.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.690 "dma_device_type": 2 00:10:43.690 }, 00:10:43.690 { 00:10:43.690 "dma_device_id": "system", 00:10:43.690 "dma_device_type": 1 00:10:43.690 }, 00:10:43.690 { 00:10:43.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.690 "dma_device_type": 2 00:10:43.690 } 00:10:43.690 ], 00:10:43.690 "driver_specific": { 00:10:43.690 "raid": { 00:10:43.690 "uuid": "d444c59b-999d-4c66-99ad-c719e859773d", 00:10:43.690 "strip_size_kb": 64, 00:10:43.690 "state": "online", 00:10:43.690 "raid_level": "concat", 00:10:43.690 "superblock": false, 00:10:43.690 "num_base_bdevs": 4, 00:10:43.690 "num_base_bdevs_discovered": 4, 00:10:43.690 "num_base_bdevs_operational": 4, 00:10:43.690 "base_bdevs_list": [ 00:10:43.690 { 00:10:43.690 "name": "NewBaseBdev", 00:10:43.690 "uuid": "84755dac-1c1d-4df2-8021-f2fec98f70d2", 00:10:43.690 "is_configured": true, 00:10:43.690 "data_offset": 0, 00:10:43.690 "data_size": 65536 00:10:43.690 }, 00:10:43.690 { 00:10:43.690 "name": "BaseBdev2", 00:10:43.690 "uuid": "08ac591e-1819-47e5-9c62-e0d05223b464", 00:10:43.690 "is_configured": true, 00:10:43.690 "data_offset": 0, 00:10:43.690 "data_size": 65536 00:10:43.690 }, 00:10:43.690 { 00:10:43.690 "name": "BaseBdev3", 00:10:43.690 "uuid": "dab7fc67-f985-4bb6-bfa5-2291d7b96a2f", 00:10:43.690 "is_configured": true, 00:10:43.690 "data_offset": 0, 00:10:43.690 "data_size": 65536 00:10:43.690 }, 00:10:43.690 { 00:10:43.690 "name": "BaseBdev4", 
00:10:43.690 "uuid": "70d0def4-912c-43ea-ac9f-3d09aa65db1d", 00:10:43.690 "is_configured": true, 00:10:43.690 "data_offset": 0, 00:10:43.690 "data_size": 65536 00:10:43.690 } 00:10:43.690 ] 00:10:43.690 } 00:10:43.690 } 00:10:43.690 }' 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:43.690 BaseBdev2 00:10:43.690 BaseBdev3 00:10:43.690 BaseBdev4' 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.690 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:43.691 21:18:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.691 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.950 [2024-11-26 21:18:01.848300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.950 [2024-11-26 21:18:01.848384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.950 [2024-11-26 21:18:01.848513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.950 [2024-11-26 21:18:01.848612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.951 [2024-11-26 21:18:01.848659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71089 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71089 ']' 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71089 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71089 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.951 killing process with pid 71089 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71089' 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71089 00:10:43.951 [2024-11-26 21:18:01.897119] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:43.951 21:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71089 00:10:44.210 [2024-11-26 21:18:02.286810] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.591 21:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:45.591 00:10:45.591 real 0m11.236s 00:10:45.591 user 0m17.698s 00:10:45.591 sys 0m2.101s 00:10:45.591 ************************************ 00:10:45.591 END TEST raid_state_function_test 00:10:45.591 ************************************ 00:10:45.591 21:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.591 21:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.592 21:18:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:10:45.592 21:18:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:45.592 21:18:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.592 21:18:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.592 ************************************ 00:10:45.592 START TEST raid_state_function_test_sb 00:10:45.592 ************************************ 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:45.592 21:18:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:45.592 Process raid pid: 71760 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71760 00:10:45.592 21:18:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71760' 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71760 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71760 ']' 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.592 21:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.592 [2024-11-26 21:18:03.558982] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:45.592 [2024-11-26 21:18:03.559096] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.592 [2024-11-26 21:18:03.732715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.852 [2024-11-26 21:18:03.842535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.112 [2024-11-26 21:18:04.046160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.112 [2024-11-26 21:18:04.046204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.371 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.371 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:46.371 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:46.371 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.371 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.371 [2024-11-26 21:18:04.381396] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.371 [2024-11-26 21:18:04.381517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.371 [2024-11-26 21:18:04.381532] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.372 [2024-11-26 21:18:04.381543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.372 [2024-11-26 21:18:04.381554] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:46.372 [2024-11-26 21:18:04.381563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.372 [2024-11-26 21:18:04.381569] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:46.372 [2024-11-26 21:18:04.381577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.372 
21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.372 "name": "Existed_Raid", 00:10:46.372 "uuid": "93aa8ee4-4524-466e-83bb-22f5bcbcffef", 00:10:46.372 "strip_size_kb": 64, 00:10:46.372 "state": "configuring", 00:10:46.372 "raid_level": "concat", 00:10:46.372 "superblock": true, 00:10:46.372 "num_base_bdevs": 4, 00:10:46.372 "num_base_bdevs_discovered": 0, 00:10:46.372 "num_base_bdevs_operational": 4, 00:10:46.372 "base_bdevs_list": [ 00:10:46.372 { 00:10:46.372 "name": "BaseBdev1", 00:10:46.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.372 "is_configured": false, 00:10:46.372 "data_offset": 0, 00:10:46.372 "data_size": 0 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "name": "BaseBdev2", 00:10:46.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.372 "is_configured": false, 00:10:46.372 "data_offset": 0, 00:10:46.372 "data_size": 0 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "name": "BaseBdev3", 00:10:46.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.372 "is_configured": false, 00:10:46.372 "data_offset": 0, 00:10:46.372 "data_size": 0 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "name": "BaseBdev4", 00:10:46.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.372 "is_configured": false, 00:10:46.372 "data_offset": 0, 00:10:46.372 "data_size": 0 00:10:46.372 } 00:10:46.372 ] 00:10:46.372 }' 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.372 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.942 21:18:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.942 [2024-11-26 21:18:04.816574] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.942 [2024-11-26 21:18:04.816613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.942 [2024-11-26 21:18:04.828577] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.942 [2024-11-26 21:18:04.828625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.942 [2024-11-26 21:18:04.828634] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.942 [2024-11-26 21:18:04.828643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.942 [2024-11-26 21:18:04.828649] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.942 [2024-11-26 21:18:04.828657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.942 [2024-11-26 21:18:04.828664] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:46.942 [2024-11-26 21:18:04.828672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.942 [2024-11-26 21:18:04.875633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.942 BaseBdev1 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.942 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.942 [ 00:10:46.942 { 00:10:46.943 "name": "BaseBdev1", 00:10:46.943 "aliases": [ 00:10:46.943 "db438002-c496-464a-bd75-4fd14c3b9e49" 00:10:46.943 ], 00:10:46.943 "product_name": "Malloc disk", 00:10:46.943 "block_size": 512, 00:10:46.943 "num_blocks": 65536, 00:10:46.943 "uuid": "db438002-c496-464a-bd75-4fd14c3b9e49", 00:10:46.943 "assigned_rate_limits": { 00:10:46.943 "rw_ios_per_sec": 0, 00:10:46.943 "rw_mbytes_per_sec": 0, 00:10:46.943 "r_mbytes_per_sec": 0, 00:10:46.943 "w_mbytes_per_sec": 0 00:10:46.943 }, 00:10:46.943 "claimed": true, 00:10:46.943 "claim_type": "exclusive_write", 00:10:46.943 "zoned": false, 00:10:46.943 "supported_io_types": { 00:10:46.943 "read": true, 00:10:46.943 "write": true, 00:10:46.943 "unmap": true, 00:10:46.943 "flush": true, 00:10:46.943 "reset": true, 00:10:46.943 "nvme_admin": false, 00:10:46.943 "nvme_io": false, 00:10:46.943 "nvme_io_md": false, 00:10:46.943 "write_zeroes": true, 00:10:46.943 "zcopy": true, 00:10:46.943 "get_zone_info": false, 00:10:46.943 "zone_management": false, 00:10:46.943 "zone_append": false, 00:10:46.943 "compare": false, 00:10:46.943 "compare_and_write": false, 00:10:46.943 "abort": true, 00:10:46.943 "seek_hole": false, 00:10:46.943 "seek_data": false, 00:10:46.943 "copy": true, 00:10:46.943 "nvme_iov_md": false 00:10:46.943 }, 00:10:46.943 "memory_domains": [ 00:10:46.943 { 00:10:46.943 "dma_device_id": "system", 00:10:46.943 "dma_device_type": 1 00:10:46.943 }, 00:10:46.943 { 00:10:46.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.943 "dma_device_type": 2 00:10:46.943 } 
00:10:46.943 ], 00:10:46.943 "driver_specific": {} 00:10:46.943 } 00:10:46.943 ] 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.943 21:18:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.943 "name": "Existed_Raid", 00:10:46.943 "uuid": "0f87bd18-5d70-4108-886e-fc5e67b59a5c", 00:10:46.943 "strip_size_kb": 64, 00:10:46.943 "state": "configuring", 00:10:46.943 "raid_level": "concat", 00:10:46.943 "superblock": true, 00:10:46.943 "num_base_bdevs": 4, 00:10:46.943 "num_base_bdevs_discovered": 1, 00:10:46.943 "num_base_bdevs_operational": 4, 00:10:46.943 "base_bdevs_list": [ 00:10:46.943 { 00:10:46.943 "name": "BaseBdev1", 00:10:46.943 "uuid": "db438002-c496-464a-bd75-4fd14c3b9e49", 00:10:46.943 "is_configured": true, 00:10:46.943 "data_offset": 2048, 00:10:46.943 "data_size": 63488 00:10:46.943 }, 00:10:46.943 { 00:10:46.943 "name": "BaseBdev2", 00:10:46.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.943 "is_configured": false, 00:10:46.943 "data_offset": 0, 00:10:46.943 "data_size": 0 00:10:46.943 }, 00:10:46.943 { 00:10:46.943 "name": "BaseBdev3", 00:10:46.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.943 "is_configured": false, 00:10:46.943 "data_offset": 0, 00:10:46.943 "data_size": 0 00:10:46.943 }, 00:10:46.943 { 00:10:46.943 "name": "BaseBdev4", 00:10:46.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.943 "is_configured": false, 00:10:46.943 "data_offset": 0, 00:10:46.943 "data_size": 0 00:10:46.943 } 00:10:46.943 ] 00:10:46.943 }' 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.943 21:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.203 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.203 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.203 21:18:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.203 [2024-11-26 21:18:05.354893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.203 [2024-11-26 21:18:05.354949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.464 [2024-11-26 21:18:05.366912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.464 [2024-11-26 21:18:05.368714] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.464 [2024-11-26 21:18:05.368797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.464 [2024-11-26 21:18:05.368811] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.464 [2024-11-26 21:18:05.368822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.464 [2024-11-26 21:18:05.368828] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:47.464 [2024-11-26 21:18:05.368836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:47.464 "name": "Existed_Raid", 00:10:47.464 "uuid": "6ef87245-fe88-4a17-9cbb-a427ce3bf95d", 00:10:47.464 "strip_size_kb": 64, 00:10:47.464 "state": "configuring", 00:10:47.464 "raid_level": "concat", 00:10:47.464 "superblock": true, 00:10:47.464 "num_base_bdevs": 4, 00:10:47.464 "num_base_bdevs_discovered": 1, 00:10:47.464 "num_base_bdevs_operational": 4, 00:10:47.464 "base_bdevs_list": [ 00:10:47.464 { 00:10:47.464 "name": "BaseBdev1", 00:10:47.464 "uuid": "db438002-c496-464a-bd75-4fd14c3b9e49", 00:10:47.464 "is_configured": true, 00:10:47.464 "data_offset": 2048, 00:10:47.464 "data_size": 63488 00:10:47.464 }, 00:10:47.464 { 00:10:47.464 "name": "BaseBdev2", 00:10:47.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.464 "is_configured": false, 00:10:47.464 "data_offset": 0, 00:10:47.464 "data_size": 0 00:10:47.464 }, 00:10:47.464 { 00:10:47.464 "name": "BaseBdev3", 00:10:47.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.464 "is_configured": false, 00:10:47.464 "data_offset": 0, 00:10:47.464 "data_size": 0 00:10:47.464 }, 00:10:47.464 { 00:10:47.464 "name": "BaseBdev4", 00:10:47.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.464 "is_configured": false, 00:10:47.464 "data_offset": 0, 00:10:47.464 "data_size": 0 00:10:47.464 } 00:10:47.464 ] 00:10:47.464 }' 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.464 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.767 [2024-11-26 21:18:05.833816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:47.767 BaseBdev2 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.767 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.767 [ 00:10:47.767 { 00:10:47.767 "name": "BaseBdev2", 00:10:47.767 "aliases": [ 00:10:47.767 "ab6ed4d0-f16e-40c4-b8b1-4ca489e3e4c2" 00:10:47.767 ], 00:10:47.767 "product_name": "Malloc disk", 00:10:47.767 "block_size": 512, 00:10:47.767 "num_blocks": 65536, 00:10:47.767 "uuid": "ab6ed4d0-f16e-40c4-b8b1-4ca489e3e4c2", 
00:10:47.767 "assigned_rate_limits": { 00:10:47.767 "rw_ios_per_sec": 0, 00:10:47.767 "rw_mbytes_per_sec": 0, 00:10:47.767 "r_mbytes_per_sec": 0, 00:10:47.767 "w_mbytes_per_sec": 0 00:10:47.767 }, 00:10:47.767 "claimed": true, 00:10:47.767 "claim_type": "exclusive_write", 00:10:47.767 "zoned": false, 00:10:47.767 "supported_io_types": { 00:10:47.767 "read": true, 00:10:47.767 "write": true, 00:10:47.767 "unmap": true, 00:10:47.767 "flush": true, 00:10:47.767 "reset": true, 00:10:47.767 "nvme_admin": false, 00:10:47.767 "nvme_io": false, 00:10:47.767 "nvme_io_md": false, 00:10:47.767 "write_zeroes": true, 00:10:47.767 "zcopy": true, 00:10:47.767 "get_zone_info": false, 00:10:47.767 "zone_management": false, 00:10:47.767 "zone_append": false, 00:10:47.767 "compare": false, 00:10:47.767 "compare_and_write": false, 00:10:47.767 "abort": true, 00:10:47.767 "seek_hole": false, 00:10:47.767 "seek_data": false, 00:10:47.767 "copy": true, 00:10:47.767 "nvme_iov_md": false 00:10:47.767 }, 00:10:47.767 "memory_domains": [ 00:10:47.767 { 00:10:47.767 "dma_device_id": "system", 00:10:47.767 "dma_device_type": 1 00:10:47.767 }, 00:10:47.767 { 00:10:47.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.768 "dma_device_type": 2 00:10:47.768 } 00:10:47.768 ], 00:10:47.768 "driver_specific": {} 00:10:47.768 } 00:10:47.768 ] 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.768 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.768 "name": "Existed_Raid", 00:10:47.768 "uuid": "6ef87245-fe88-4a17-9cbb-a427ce3bf95d", 00:10:47.768 "strip_size_kb": 64, 00:10:47.768 "state": "configuring", 00:10:47.768 "raid_level": "concat", 00:10:47.768 "superblock": true, 00:10:47.768 "num_base_bdevs": 4, 00:10:47.768 "num_base_bdevs_discovered": 2, 00:10:47.768 
"num_base_bdevs_operational": 4, 00:10:47.768 "base_bdevs_list": [ 00:10:47.768 { 00:10:47.768 "name": "BaseBdev1", 00:10:47.768 "uuid": "db438002-c496-464a-bd75-4fd14c3b9e49", 00:10:47.768 "is_configured": true, 00:10:47.768 "data_offset": 2048, 00:10:47.768 "data_size": 63488 00:10:47.768 }, 00:10:47.768 { 00:10:47.768 "name": "BaseBdev2", 00:10:47.768 "uuid": "ab6ed4d0-f16e-40c4-b8b1-4ca489e3e4c2", 00:10:47.768 "is_configured": true, 00:10:47.768 "data_offset": 2048, 00:10:47.768 "data_size": 63488 00:10:47.768 }, 00:10:47.768 { 00:10:47.768 "name": "BaseBdev3", 00:10:47.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.768 "is_configured": false, 00:10:47.768 "data_offset": 0, 00:10:47.768 "data_size": 0 00:10:47.768 }, 00:10:47.768 { 00:10:47.768 "name": "BaseBdev4", 00:10:47.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.768 "is_configured": false, 00:10:47.768 "data_offset": 0, 00:10:47.768 "data_size": 0 00:10:47.768 } 00:10:47.768 ] 00:10:47.768 }' 00:10:48.027 21:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.027 21:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.287 [2024-11-26 21:18:06.378332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.287 BaseBdev3 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.287 [ 00:10:48.287 { 00:10:48.287 "name": "BaseBdev3", 00:10:48.287 "aliases": [ 00:10:48.287 "3ee450d2-f61f-42ef-926e-d4ddda22ac24" 00:10:48.287 ], 00:10:48.287 "product_name": "Malloc disk", 00:10:48.287 "block_size": 512, 00:10:48.287 "num_blocks": 65536, 00:10:48.287 "uuid": "3ee450d2-f61f-42ef-926e-d4ddda22ac24", 00:10:48.287 "assigned_rate_limits": { 00:10:48.287 "rw_ios_per_sec": 0, 00:10:48.287 "rw_mbytes_per_sec": 0, 00:10:48.287 "r_mbytes_per_sec": 0, 00:10:48.287 "w_mbytes_per_sec": 0 00:10:48.287 }, 00:10:48.287 "claimed": true, 00:10:48.287 "claim_type": "exclusive_write", 00:10:48.287 "zoned": false, 00:10:48.287 "supported_io_types": { 
00:10:48.287 "read": true, 00:10:48.287 "write": true, 00:10:48.287 "unmap": true, 00:10:48.287 "flush": true, 00:10:48.287 "reset": true, 00:10:48.287 "nvme_admin": false, 00:10:48.287 "nvme_io": false, 00:10:48.287 "nvme_io_md": false, 00:10:48.287 "write_zeroes": true, 00:10:48.287 "zcopy": true, 00:10:48.287 "get_zone_info": false, 00:10:48.287 "zone_management": false, 00:10:48.287 "zone_append": false, 00:10:48.287 "compare": false, 00:10:48.287 "compare_and_write": false, 00:10:48.287 "abort": true, 00:10:48.287 "seek_hole": false, 00:10:48.287 "seek_data": false, 00:10:48.287 "copy": true, 00:10:48.287 "nvme_iov_md": false 00:10:48.287 }, 00:10:48.287 "memory_domains": [ 00:10:48.287 { 00:10:48.287 "dma_device_id": "system", 00:10:48.287 "dma_device_type": 1 00:10:48.287 }, 00:10:48.287 { 00:10:48.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.287 "dma_device_type": 2 00:10:48.287 } 00:10:48.287 ], 00:10:48.287 "driver_specific": {} 00:10:48.287 } 00:10:48.287 ] 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.287 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.548 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.548 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.548 "name": "Existed_Raid", 00:10:48.548 "uuid": "6ef87245-fe88-4a17-9cbb-a427ce3bf95d", 00:10:48.548 "strip_size_kb": 64, 00:10:48.548 "state": "configuring", 00:10:48.548 "raid_level": "concat", 00:10:48.548 "superblock": true, 00:10:48.548 "num_base_bdevs": 4, 00:10:48.548 "num_base_bdevs_discovered": 3, 00:10:48.548 "num_base_bdevs_operational": 4, 00:10:48.548 "base_bdevs_list": [ 00:10:48.548 { 00:10:48.548 "name": "BaseBdev1", 00:10:48.548 "uuid": "db438002-c496-464a-bd75-4fd14c3b9e49", 00:10:48.548 "is_configured": true, 00:10:48.548 "data_offset": 2048, 00:10:48.548 "data_size": 63488 00:10:48.548 }, 00:10:48.548 { 00:10:48.548 "name": "BaseBdev2", 00:10:48.548 
"uuid": "ab6ed4d0-f16e-40c4-b8b1-4ca489e3e4c2", 00:10:48.548 "is_configured": true, 00:10:48.548 "data_offset": 2048, 00:10:48.548 "data_size": 63488 00:10:48.548 }, 00:10:48.548 { 00:10:48.548 "name": "BaseBdev3", 00:10:48.548 "uuid": "3ee450d2-f61f-42ef-926e-d4ddda22ac24", 00:10:48.548 "is_configured": true, 00:10:48.548 "data_offset": 2048, 00:10:48.548 "data_size": 63488 00:10:48.548 }, 00:10:48.548 { 00:10:48.548 "name": "BaseBdev4", 00:10:48.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.548 "is_configured": false, 00:10:48.548 "data_offset": 0, 00:10:48.548 "data_size": 0 00:10:48.548 } 00:10:48.548 ] 00:10:48.548 }' 00:10:48.548 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.548 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.807 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:48.807 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.807 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.807 [2024-11-26 21:18:06.916329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:48.807 [2024-11-26 21:18:06.916671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:48.808 [2024-11-26 21:18:06.916731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.808 [2024-11-26 21:18:06.917028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:48.808 [2024-11-26 21:18:06.917228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:48.808 [2024-11-26 21:18:06.917272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:10:48.808 BaseBdev4 00:10:48.808 [2024-11-26 21:18:06.917470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.808 [ 00:10:48.808 { 00:10:48.808 "name": "BaseBdev4", 00:10:48.808 "aliases": [ 00:10:48.808 "3078f63b-a843-47a6-856b-e0a224a754b5" 00:10:48.808 ], 00:10:48.808 "product_name": "Malloc disk", 00:10:48.808 "block_size": 512, 
00:10:48.808 "num_blocks": 65536, 00:10:48.808 "uuid": "3078f63b-a843-47a6-856b-e0a224a754b5", 00:10:48.808 "assigned_rate_limits": { 00:10:48.808 "rw_ios_per_sec": 0, 00:10:48.808 "rw_mbytes_per_sec": 0, 00:10:48.808 "r_mbytes_per_sec": 0, 00:10:48.808 "w_mbytes_per_sec": 0 00:10:48.808 }, 00:10:48.808 "claimed": true, 00:10:48.808 "claim_type": "exclusive_write", 00:10:48.808 "zoned": false, 00:10:48.808 "supported_io_types": { 00:10:48.808 "read": true, 00:10:48.808 "write": true, 00:10:48.808 "unmap": true, 00:10:48.808 "flush": true, 00:10:48.808 "reset": true, 00:10:48.808 "nvme_admin": false, 00:10:48.808 "nvme_io": false, 00:10:48.808 "nvme_io_md": false, 00:10:48.808 "write_zeroes": true, 00:10:48.808 "zcopy": true, 00:10:48.808 "get_zone_info": false, 00:10:48.808 "zone_management": false, 00:10:48.808 "zone_append": false, 00:10:48.808 "compare": false, 00:10:48.808 "compare_and_write": false, 00:10:48.808 "abort": true, 00:10:48.808 "seek_hole": false, 00:10:48.808 "seek_data": false, 00:10:48.808 "copy": true, 00:10:48.808 "nvme_iov_md": false 00:10:48.808 }, 00:10:48.808 "memory_domains": [ 00:10:48.808 { 00:10:48.808 "dma_device_id": "system", 00:10:48.808 "dma_device_type": 1 00:10:48.808 }, 00:10:48.808 { 00:10:48.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.808 "dma_device_type": 2 00:10:48.808 } 00:10:48.808 ], 00:10:48.808 "driver_specific": {} 00:10:48.808 } 00:10:48.808 ] 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.808 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.067 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.067 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.067 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.067 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.067 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.067 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.067 "name": "Existed_Raid", 00:10:49.067 "uuid": "6ef87245-fe88-4a17-9cbb-a427ce3bf95d", 00:10:49.067 "strip_size_kb": 64, 00:10:49.067 "state": "online", 00:10:49.067 "raid_level": "concat", 00:10:49.067 "superblock": true, 00:10:49.067 "num_base_bdevs": 
4, 00:10:49.067 "num_base_bdevs_discovered": 4, 00:10:49.067 "num_base_bdevs_operational": 4, 00:10:49.067 "base_bdevs_list": [ 00:10:49.067 { 00:10:49.067 "name": "BaseBdev1", 00:10:49.067 "uuid": "db438002-c496-464a-bd75-4fd14c3b9e49", 00:10:49.067 "is_configured": true, 00:10:49.067 "data_offset": 2048, 00:10:49.067 "data_size": 63488 00:10:49.067 }, 00:10:49.067 { 00:10:49.067 "name": "BaseBdev2", 00:10:49.068 "uuid": "ab6ed4d0-f16e-40c4-b8b1-4ca489e3e4c2", 00:10:49.068 "is_configured": true, 00:10:49.068 "data_offset": 2048, 00:10:49.068 "data_size": 63488 00:10:49.068 }, 00:10:49.068 { 00:10:49.068 "name": "BaseBdev3", 00:10:49.068 "uuid": "3ee450d2-f61f-42ef-926e-d4ddda22ac24", 00:10:49.068 "is_configured": true, 00:10:49.068 "data_offset": 2048, 00:10:49.068 "data_size": 63488 00:10:49.068 }, 00:10:49.068 { 00:10:49.068 "name": "BaseBdev4", 00:10:49.068 "uuid": "3078f63b-a843-47a6-856b-e0a224a754b5", 00:10:49.068 "is_configured": true, 00:10:49.068 "data_offset": 2048, 00:10:49.068 "data_size": 63488 00:10:49.068 } 00:10:49.068 ] 00:10:49.068 }' 00:10:49.068 21:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.068 21:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.327 
21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.327 [2024-11-26 21:18:07.368012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.327 "name": "Existed_Raid", 00:10:49.327 "aliases": [ 00:10:49.327 "6ef87245-fe88-4a17-9cbb-a427ce3bf95d" 00:10:49.327 ], 00:10:49.327 "product_name": "Raid Volume", 00:10:49.327 "block_size": 512, 00:10:49.327 "num_blocks": 253952, 00:10:49.327 "uuid": "6ef87245-fe88-4a17-9cbb-a427ce3bf95d", 00:10:49.327 "assigned_rate_limits": { 00:10:49.327 "rw_ios_per_sec": 0, 00:10:49.327 "rw_mbytes_per_sec": 0, 00:10:49.327 "r_mbytes_per_sec": 0, 00:10:49.327 "w_mbytes_per_sec": 0 00:10:49.327 }, 00:10:49.327 "claimed": false, 00:10:49.327 "zoned": false, 00:10:49.327 "supported_io_types": { 00:10:49.327 "read": true, 00:10:49.327 "write": true, 00:10:49.327 "unmap": true, 00:10:49.327 "flush": true, 00:10:49.327 "reset": true, 00:10:49.327 "nvme_admin": false, 00:10:49.327 "nvme_io": false, 00:10:49.327 "nvme_io_md": false, 00:10:49.327 "write_zeroes": true, 00:10:49.327 "zcopy": false, 00:10:49.327 "get_zone_info": false, 00:10:49.327 "zone_management": false, 00:10:49.327 "zone_append": false, 00:10:49.327 "compare": false, 00:10:49.327 "compare_and_write": false, 00:10:49.327 "abort": false, 00:10:49.327 "seek_hole": false, 00:10:49.327 "seek_data": false, 00:10:49.327 "copy": false, 00:10:49.327 
"nvme_iov_md": false 00:10:49.327 }, 00:10:49.327 "memory_domains": [ 00:10:49.327 { 00:10:49.327 "dma_device_id": "system", 00:10:49.327 "dma_device_type": 1 00:10:49.327 }, 00:10:49.327 { 00:10:49.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.327 "dma_device_type": 2 00:10:49.327 }, 00:10:49.327 { 00:10:49.327 "dma_device_id": "system", 00:10:49.327 "dma_device_type": 1 00:10:49.327 }, 00:10:49.327 { 00:10:49.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.327 "dma_device_type": 2 00:10:49.327 }, 00:10:49.327 { 00:10:49.327 "dma_device_id": "system", 00:10:49.327 "dma_device_type": 1 00:10:49.327 }, 00:10:49.327 { 00:10:49.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.327 "dma_device_type": 2 00:10:49.327 }, 00:10:49.327 { 00:10:49.327 "dma_device_id": "system", 00:10:49.327 "dma_device_type": 1 00:10:49.327 }, 00:10:49.327 { 00:10:49.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.327 "dma_device_type": 2 00:10:49.327 } 00:10:49.327 ], 00:10:49.327 "driver_specific": { 00:10:49.327 "raid": { 00:10:49.327 "uuid": "6ef87245-fe88-4a17-9cbb-a427ce3bf95d", 00:10:49.327 "strip_size_kb": 64, 00:10:49.327 "state": "online", 00:10:49.327 "raid_level": "concat", 00:10:49.327 "superblock": true, 00:10:49.327 "num_base_bdevs": 4, 00:10:49.327 "num_base_bdevs_discovered": 4, 00:10:49.327 "num_base_bdevs_operational": 4, 00:10:49.327 "base_bdevs_list": [ 00:10:49.327 { 00:10:49.327 "name": "BaseBdev1", 00:10:49.327 "uuid": "db438002-c496-464a-bd75-4fd14c3b9e49", 00:10:49.327 "is_configured": true, 00:10:49.327 "data_offset": 2048, 00:10:49.327 "data_size": 63488 00:10:49.327 }, 00:10:49.327 { 00:10:49.327 "name": "BaseBdev2", 00:10:49.327 "uuid": "ab6ed4d0-f16e-40c4-b8b1-4ca489e3e4c2", 00:10:49.327 "is_configured": true, 00:10:49.327 "data_offset": 2048, 00:10:49.327 "data_size": 63488 00:10:49.327 }, 00:10:49.327 { 00:10:49.327 "name": "BaseBdev3", 00:10:49.327 "uuid": "3ee450d2-f61f-42ef-926e-d4ddda22ac24", 00:10:49.327 "is_configured": true, 
00:10:49.327 "data_offset": 2048, 00:10:49.327 "data_size": 63488 00:10:49.327 }, 00:10:49.327 { 00:10:49.327 "name": "BaseBdev4", 00:10:49.327 "uuid": "3078f63b-a843-47a6-856b-e0a224a754b5", 00:10:49.327 "is_configured": true, 00:10:49.327 "data_offset": 2048, 00:10:49.327 "data_size": 63488 00:10:49.327 } 00:10:49.327 ] 00:10:49.327 } 00:10:49.327 } 00:10:49.327 }' 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:49.327 BaseBdev2 00:10:49.327 BaseBdev3 00:10:49.327 BaseBdev4' 00:10:49.327 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.587 21:18:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.587 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.587 [2024-11-26 21:18:07.655241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.587 [2024-11-26 21:18:07.655317] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.587 [2024-11-26 21:18:07.655372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.846 "name": "Existed_Raid", 00:10:49.846 "uuid": "6ef87245-fe88-4a17-9cbb-a427ce3bf95d", 00:10:49.846 "strip_size_kb": 64, 00:10:49.846 "state": "offline", 00:10:49.846 "raid_level": "concat", 00:10:49.846 "superblock": true, 00:10:49.846 "num_base_bdevs": 4, 00:10:49.846 "num_base_bdevs_discovered": 3, 00:10:49.846 "num_base_bdevs_operational": 3, 00:10:49.846 "base_bdevs_list": [ 00:10:49.846 { 00:10:49.846 "name": null, 00:10:49.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.846 "is_configured": false, 00:10:49.846 "data_offset": 0, 00:10:49.846 "data_size": 63488 00:10:49.846 }, 00:10:49.846 { 00:10:49.846 "name": "BaseBdev2", 00:10:49.846 "uuid": "ab6ed4d0-f16e-40c4-b8b1-4ca489e3e4c2", 00:10:49.846 "is_configured": true, 00:10:49.846 "data_offset": 2048, 00:10:49.846 "data_size": 63488 00:10:49.846 }, 00:10:49.846 { 00:10:49.846 "name": "BaseBdev3", 00:10:49.846 "uuid": "3ee450d2-f61f-42ef-926e-d4ddda22ac24", 00:10:49.846 "is_configured": true, 00:10:49.846 "data_offset": 2048, 00:10:49.846 "data_size": 63488 00:10:49.846 }, 00:10:49.846 { 00:10:49.846 "name": "BaseBdev4", 00:10:49.846 "uuid": "3078f63b-a843-47a6-856b-e0a224a754b5", 00:10:49.846 "is_configured": true, 00:10:49.846 "data_offset": 2048, 00:10:49.846 "data_size": 63488 00:10:49.846 } 00:10:49.846 ] 00:10:49.846 }' 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.846 21:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.105 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:50.105 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.105 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.105 
21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.105 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.105 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.105 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.105 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.105 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.105 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:50.105 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.105 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.105 [2024-11-26 21:18:08.221115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:50.363 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.363 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.363 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.363 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.363 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.363 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.364 [2024-11-26 21:18:08.378996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.364 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:50.623 21:18:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.623 [2024-11-26 21:18:08.531152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:50.623 [2024-11-26 21:18:08.531202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.623 BaseBdev2 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.623 [ 00:10:50.623 { 00:10:50.623 "name": "BaseBdev2", 00:10:50.623 "aliases": [ 00:10:50.623 
"8cf1e682-636a-4115-a948-75b29ec123d7" 00:10:50.623 ], 00:10:50.623 "product_name": "Malloc disk", 00:10:50.623 "block_size": 512, 00:10:50.623 "num_blocks": 65536, 00:10:50.623 "uuid": "8cf1e682-636a-4115-a948-75b29ec123d7", 00:10:50.623 "assigned_rate_limits": { 00:10:50.623 "rw_ios_per_sec": 0, 00:10:50.623 "rw_mbytes_per_sec": 0, 00:10:50.623 "r_mbytes_per_sec": 0, 00:10:50.623 "w_mbytes_per_sec": 0 00:10:50.623 }, 00:10:50.623 "claimed": false, 00:10:50.623 "zoned": false, 00:10:50.623 "supported_io_types": { 00:10:50.623 "read": true, 00:10:50.623 "write": true, 00:10:50.623 "unmap": true, 00:10:50.623 "flush": true, 00:10:50.623 "reset": true, 00:10:50.623 "nvme_admin": false, 00:10:50.623 "nvme_io": false, 00:10:50.623 "nvme_io_md": false, 00:10:50.623 "write_zeroes": true, 00:10:50.623 "zcopy": true, 00:10:50.623 "get_zone_info": false, 00:10:50.623 "zone_management": false, 00:10:50.623 "zone_append": false, 00:10:50.623 "compare": false, 00:10:50.623 "compare_and_write": false, 00:10:50.623 "abort": true, 00:10:50.623 "seek_hole": false, 00:10:50.623 "seek_data": false, 00:10:50.623 "copy": true, 00:10:50.623 "nvme_iov_md": false 00:10:50.623 }, 00:10:50.623 "memory_domains": [ 00:10:50.623 { 00:10:50.623 "dma_device_id": "system", 00:10:50.623 "dma_device_type": 1 00:10:50.623 }, 00:10:50.623 { 00:10:50.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.623 "dma_device_type": 2 00:10:50.623 } 00:10:50.623 ], 00:10:50.623 "driver_specific": {} 00:10:50.623 } 00:10:50.623 ] 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.623 21:18:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.623 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.883 BaseBdev3 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.883 [ 00:10:50.883 { 
00:10:50.883 "name": "BaseBdev3", 00:10:50.883 "aliases": [ 00:10:50.883 "96f30223-3774-4fe7-99ae-fc20c8fcf63c" 00:10:50.883 ], 00:10:50.883 "product_name": "Malloc disk", 00:10:50.883 "block_size": 512, 00:10:50.883 "num_blocks": 65536, 00:10:50.883 "uuid": "96f30223-3774-4fe7-99ae-fc20c8fcf63c", 00:10:50.883 "assigned_rate_limits": { 00:10:50.883 "rw_ios_per_sec": 0, 00:10:50.883 "rw_mbytes_per_sec": 0, 00:10:50.883 "r_mbytes_per_sec": 0, 00:10:50.883 "w_mbytes_per_sec": 0 00:10:50.883 }, 00:10:50.883 "claimed": false, 00:10:50.883 "zoned": false, 00:10:50.883 "supported_io_types": { 00:10:50.883 "read": true, 00:10:50.883 "write": true, 00:10:50.883 "unmap": true, 00:10:50.883 "flush": true, 00:10:50.883 "reset": true, 00:10:50.883 "nvme_admin": false, 00:10:50.883 "nvme_io": false, 00:10:50.883 "nvme_io_md": false, 00:10:50.883 "write_zeroes": true, 00:10:50.883 "zcopy": true, 00:10:50.883 "get_zone_info": false, 00:10:50.883 "zone_management": false, 00:10:50.883 "zone_append": false, 00:10:50.883 "compare": false, 00:10:50.883 "compare_and_write": false, 00:10:50.883 "abort": true, 00:10:50.883 "seek_hole": false, 00:10:50.883 "seek_data": false, 00:10:50.883 "copy": true, 00:10:50.883 "nvme_iov_md": false 00:10:50.883 }, 00:10:50.883 "memory_domains": [ 00:10:50.883 { 00:10:50.883 "dma_device_id": "system", 00:10:50.883 "dma_device_type": 1 00:10:50.883 }, 00:10:50.883 { 00:10:50.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.883 "dma_device_type": 2 00:10:50.883 } 00:10:50.883 ], 00:10:50.883 "driver_specific": {} 00:10:50.883 } 00:10:50.883 ] 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.883 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.884 BaseBdev4 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:50.884 [ 00:10:50.884 { 00:10:50.884 "name": "BaseBdev4", 00:10:50.884 "aliases": [ 00:10:50.884 "db4375d5-7146-44a0-9d64-afa7916996b9" 00:10:50.884 ], 00:10:50.884 "product_name": "Malloc disk", 00:10:50.884 "block_size": 512, 00:10:50.884 "num_blocks": 65536, 00:10:50.884 "uuid": "db4375d5-7146-44a0-9d64-afa7916996b9", 00:10:50.884 "assigned_rate_limits": { 00:10:50.884 "rw_ios_per_sec": 0, 00:10:50.884 "rw_mbytes_per_sec": 0, 00:10:50.884 "r_mbytes_per_sec": 0, 00:10:50.884 "w_mbytes_per_sec": 0 00:10:50.884 }, 00:10:50.884 "claimed": false, 00:10:50.884 "zoned": false, 00:10:50.884 "supported_io_types": { 00:10:50.884 "read": true, 00:10:50.884 "write": true, 00:10:50.884 "unmap": true, 00:10:50.884 "flush": true, 00:10:50.884 "reset": true, 00:10:50.884 "nvme_admin": false, 00:10:50.884 "nvme_io": false, 00:10:50.884 "nvme_io_md": false, 00:10:50.884 "write_zeroes": true, 00:10:50.884 "zcopy": true, 00:10:50.884 "get_zone_info": false, 00:10:50.884 "zone_management": false, 00:10:50.884 "zone_append": false, 00:10:50.884 "compare": false, 00:10:50.884 "compare_and_write": false, 00:10:50.884 "abort": true, 00:10:50.884 "seek_hole": false, 00:10:50.884 "seek_data": false, 00:10:50.884 "copy": true, 00:10:50.884 "nvme_iov_md": false 00:10:50.884 }, 00:10:50.884 "memory_domains": [ 00:10:50.884 { 00:10:50.884 "dma_device_id": "system", 00:10:50.884 "dma_device_type": 1 00:10:50.884 }, 00:10:50.884 { 00:10:50.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.884 "dma_device_type": 2 00:10:50.884 } 00:10:50.884 ], 00:10:50.884 "driver_specific": {} 00:10:50.884 } 00:10:50.884 ] 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.884 21:18:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.884 [2024-11-26 21:18:08.929373] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:50.884 [2024-11-26 21:18:08.929475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:50.884 [2024-11-26 21:18:08.929518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.884 [2024-11-26 21:18:08.931351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.884 [2024-11-26 21:18:08.931442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.884 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.884 "name": "Existed_Raid", 00:10:50.884 "uuid": "8a44d37a-3156-4d91-b2ce-254403fbdd0b", 00:10:50.884 "strip_size_kb": 64, 00:10:50.884 "state": "configuring", 00:10:50.884 "raid_level": "concat", 00:10:50.884 "superblock": true, 00:10:50.884 "num_base_bdevs": 4, 00:10:50.884 "num_base_bdevs_discovered": 3, 00:10:50.884 "num_base_bdevs_operational": 4, 00:10:50.884 "base_bdevs_list": [ 00:10:50.884 { 00:10:50.884 "name": "BaseBdev1", 00:10:50.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.884 "is_configured": false, 00:10:50.884 "data_offset": 0, 00:10:50.884 "data_size": 0 00:10:50.884 }, 00:10:50.884 { 00:10:50.884 "name": "BaseBdev2", 00:10:50.884 "uuid": "8cf1e682-636a-4115-a948-75b29ec123d7", 00:10:50.884 "is_configured": true, 00:10:50.884 "data_offset": 2048, 00:10:50.884 "data_size": 63488 
00:10:50.885 }, 00:10:50.885 { 00:10:50.885 "name": "BaseBdev3", 00:10:50.885 "uuid": "96f30223-3774-4fe7-99ae-fc20c8fcf63c", 00:10:50.885 "is_configured": true, 00:10:50.885 "data_offset": 2048, 00:10:50.885 "data_size": 63488 00:10:50.885 }, 00:10:50.885 { 00:10:50.885 "name": "BaseBdev4", 00:10:50.885 "uuid": "db4375d5-7146-44a0-9d64-afa7916996b9", 00:10:50.885 "is_configured": true, 00:10:50.885 "data_offset": 2048, 00:10:50.885 "data_size": 63488 00:10:50.885 } 00:10:50.885 ] 00:10:50.885 }' 00:10:50.885 21:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.885 21:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.462 [2024-11-26 21:18:09.360641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.462 "name": "Existed_Raid", 00:10:51.462 "uuid": "8a44d37a-3156-4d91-b2ce-254403fbdd0b", 00:10:51.462 "strip_size_kb": 64, 00:10:51.462 "state": "configuring", 00:10:51.462 "raid_level": "concat", 00:10:51.462 "superblock": true, 00:10:51.462 "num_base_bdevs": 4, 00:10:51.462 "num_base_bdevs_discovered": 2, 00:10:51.462 "num_base_bdevs_operational": 4, 00:10:51.462 "base_bdevs_list": [ 00:10:51.462 { 00:10:51.462 "name": "BaseBdev1", 00:10:51.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.462 "is_configured": false, 00:10:51.462 "data_offset": 0, 00:10:51.462 "data_size": 0 00:10:51.462 }, 00:10:51.462 { 00:10:51.462 "name": null, 00:10:51.462 "uuid": "8cf1e682-636a-4115-a948-75b29ec123d7", 00:10:51.462 "is_configured": false, 00:10:51.462 "data_offset": 0, 00:10:51.462 "data_size": 63488 
00:10:51.462 }, 00:10:51.462 { 00:10:51.462 "name": "BaseBdev3", 00:10:51.462 "uuid": "96f30223-3774-4fe7-99ae-fc20c8fcf63c", 00:10:51.462 "is_configured": true, 00:10:51.462 "data_offset": 2048, 00:10:51.462 "data_size": 63488 00:10:51.462 }, 00:10:51.462 { 00:10:51.462 "name": "BaseBdev4", 00:10:51.462 "uuid": "db4375d5-7146-44a0-9d64-afa7916996b9", 00:10:51.462 "is_configured": true, 00:10:51.462 "data_offset": 2048, 00:10:51.462 "data_size": 63488 00:10:51.462 } 00:10:51.462 ] 00:10:51.462 }' 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.462 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.721 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:51.721 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.721 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.721 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.721 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.721 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:51.721 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:51.721 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.721 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.981 [2024-11-26 21:18:09.881258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.981 BaseBdev1 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.981 [ 00:10:51.981 { 00:10:51.981 "name": "BaseBdev1", 00:10:51.981 "aliases": [ 00:10:51.981 "a6b227ab-ea0f-4e0c-8816-e960f20a2650" 00:10:51.981 ], 00:10:51.981 "product_name": "Malloc disk", 00:10:51.981 "block_size": 512, 00:10:51.981 "num_blocks": 65536, 00:10:51.981 "uuid": "a6b227ab-ea0f-4e0c-8816-e960f20a2650", 00:10:51.981 "assigned_rate_limits": { 00:10:51.981 "rw_ios_per_sec": 0, 00:10:51.981 "rw_mbytes_per_sec": 0, 
00:10:51.981 "r_mbytes_per_sec": 0, 00:10:51.981 "w_mbytes_per_sec": 0 00:10:51.981 }, 00:10:51.981 "claimed": true, 00:10:51.981 "claim_type": "exclusive_write", 00:10:51.981 "zoned": false, 00:10:51.981 "supported_io_types": { 00:10:51.981 "read": true, 00:10:51.981 "write": true, 00:10:51.981 "unmap": true, 00:10:51.981 "flush": true, 00:10:51.981 "reset": true, 00:10:51.981 "nvme_admin": false, 00:10:51.981 "nvme_io": false, 00:10:51.981 "nvme_io_md": false, 00:10:51.981 "write_zeroes": true, 00:10:51.981 "zcopy": true, 00:10:51.981 "get_zone_info": false, 00:10:51.981 "zone_management": false, 00:10:51.981 "zone_append": false, 00:10:51.981 "compare": false, 00:10:51.981 "compare_and_write": false, 00:10:51.981 "abort": true, 00:10:51.981 "seek_hole": false, 00:10:51.981 "seek_data": false, 00:10:51.981 "copy": true, 00:10:51.981 "nvme_iov_md": false 00:10:51.981 }, 00:10:51.981 "memory_domains": [ 00:10:51.981 { 00:10:51.981 "dma_device_id": "system", 00:10:51.981 "dma_device_type": 1 00:10:51.981 }, 00:10:51.981 { 00:10:51.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.981 "dma_device_type": 2 00:10:51.981 } 00:10:51.981 ], 00:10:51.981 "driver_specific": {} 00:10:51.981 } 00:10:51.981 ] 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.981 21:18:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.981 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.981 "name": "Existed_Raid", 00:10:51.981 "uuid": "8a44d37a-3156-4d91-b2ce-254403fbdd0b", 00:10:51.981 "strip_size_kb": 64, 00:10:51.981 "state": "configuring", 00:10:51.981 "raid_level": "concat", 00:10:51.981 "superblock": true, 00:10:51.981 "num_base_bdevs": 4, 00:10:51.981 "num_base_bdevs_discovered": 3, 00:10:51.981 "num_base_bdevs_operational": 4, 00:10:51.981 "base_bdevs_list": [ 00:10:51.981 { 00:10:51.981 "name": "BaseBdev1", 00:10:51.982 "uuid": "a6b227ab-ea0f-4e0c-8816-e960f20a2650", 00:10:51.982 "is_configured": true, 00:10:51.982 "data_offset": 2048, 00:10:51.982 "data_size": 63488 00:10:51.982 }, 00:10:51.982 { 
00:10:51.982 "name": null, 00:10:51.982 "uuid": "8cf1e682-636a-4115-a948-75b29ec123d7", 00:10:51.982 "is_configured": false, 00:10:51.982 "data_offset": 0, 00:10:51.982 "data_size": 63488 00:10:51.982 }, 00:10:51.982 { 00:10:51.982 "name": "BaseBdev3", 00:10:51.982 "uuid": "96f30223-3774-4fe7-99ae-fc20c8fcf63c", 00:10:51.982 "is_configured": true, 00:10:51.982 "data_offset": 2048, 00:10:51.982 "data_size": 63488 00:10:51.982 }, 00:10:51.982 { 00:10:51.982 "name": "BaseBdev4", 00:10:51.982 "uuid": "db4375d5-7146-44a0-9d64-afa7916996b9", 00:10:51.982 "is_configured": true, 00:10:51.982 "data_offset": 2048, 00:10:51.982 "data_size": 63488 00:10:51.982 } 00:10:51.982 ] 00:10:51.982 }' 00:10:51.982 21:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.982 21:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.242 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.242 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.242 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.242 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.242 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.242 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:52.242 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:52.242 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.242 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.501 [2024-11-26 21:18:10.396466] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:52.501 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.501 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.501 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.501 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.501 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.501 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.501 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.501 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.501 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.501 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.501 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.502 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.502 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.502 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.502 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.502 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.502 21:18:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.502 "name": "Existed_Raid", 00:10:52.502 "uuid": "8a44d37a-3156-4d91-b2ce-254403fbdd0b", 00:10:52.502 "strip_size_kb": 64, 00:10:52.502 "state": "configuring", 00:10:52.502 "raid_level": "concat", 00:10:52.502 "superblock": true, 00:10:52.502 "num_base_bdevs": 4, 00:10:52.502 "num_base_bdevs_discovered": 2, 00:10:52.502 "num_base_bdevs_operational": 4, 00:10:52.502 "base_bdevs_list": [ 00:10:52.502 { 00:10:52.502 "name": "BaseBdev1", 00:10:52.502 "uuid": "a6b227ab-ea0f-4e0c-8816-e960f20a2650", 00:10:52.502 "is_configured": true, 00:10:52.502 "data_offset": 2048, 00:10:52.502 "data_size": 63488 00:10:52.502 }, 00:10:52.502 { 00:10:52.502 "name": null, 00:10:52.502 "uuid": "8cf1e682-636a-4115-a948-75b29ec123d7", 00:10:52.502 "is_configured": false, 00:10:52.502 "data_offset": 0, 00:10:52.502 "data_size": 63488 00:10:52.502 }, 00:10:52.502 { 00:10:52.502 "name": null, 00:10:52.502 "uuid": "96f30223-3774-4fe7-99ae-fc20c8fcf63c", 00:10:52.502 "is_configured": false, 00:10:52.502 "data_offset": 0, 00:10:52.502 "data_size": 63488 00:10:52.502 }, 00:10:52.502 { 00:10:52.502 "name": "BaseBdev4", 00:10:52.502 "uuid": "db4375d5-7146-44a0-9d64-afa7916996b9", 00:10:52.502 "is_configured": true, 00:10:52.502 "data_offset": 2048, 00:10:52.502 "data_size": 63488 00:10:52.502 } 00:10:52.502 ] 00:10:52.502 }' 00:10:52.502 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.502 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.762 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.762 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:52.762 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.762 
21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.762 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.762 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:52.762 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:52.762 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.762 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.762 [2024-11-26 21:18:10.911609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.762 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.022 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.022 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.022 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.022 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.023 "name": "Existed_Raid", 00:10:53.023 "uuid": "8a44d37a-3156-4d91-b2ce-254403fbdd0b", 00:10:53.023 "strip_size_kb": 64, 00:10:53.023 "state": "configuring", 00:10:53.023 "raid_level": "concat", 00:10:53.023 "superblock": true, 00:10:53.023 "num_base_bdevs": 4, 00:10:53.023 "num_base_bdevs_discovered": 3, 00:10:53.023 "num_base_bdevs_operational": 4, 00:10:53.023 "base_bdevs_list": [ 00:10:53.023 { 00:10:53.023 "name": "BaseBdev1", 00:10:53.023 "uuid": "a6b227ab-ea0f-4e0c-8816-e960f20a2650", 00:10:53.023 "is_configured": true, 00:10:53.023 "data_offset": 2048, 00:10:53.023 "data_size": 63488 00:10:53.023 }, 00:10:53.023 { 00:10:53.023 "name": null, 00:10:53.023 "uuid": "8cf1e682-636a-4115-a948-75b29ec123d7", 00:10:53.023 "is_configured": false, 00:10:53.023 "data_offset": 0, 00:10:53.023 "data_size": 63488 00:10:53.023 }, 00:10:53.023 { 00:10:53.023 "name": "BaseBdev3", 00:10:53.023 "uuid": "96f30223-3774-4fe7-99ae-fc20c8fcf63c", 00:10:53.023 "is_configured": true, 00:10:53.023 "data_offset": 2048, 00:10:53.023 "data_size": 63488 00:10:53.023 }, 00:10:53.023 { 00:10:53.023 "name": "BaseBdev4", 00:10:53.023 "uuid": 
"db4375d5-7146-44a0-9d64-afa7916996b9", 00:10:53.023 "is_configured": true, 00:10:53.023 "data_offset": 2048, 00:10:53.023 "data_size": 63488 00:10:53.023 } 00:10:53.023 ] 00:10:53.023 }' 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.023 21:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.283 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.283 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:53.283 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.283 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.283 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.283 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:53.283 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:53.283 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.283 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.283 [2024-11-26 21:18:11.390811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.543 "name": "Existed_Raid", 00:10:53.543 "uuid": "8a44d37a-3156-4d91-b2ce-254403fbdd0b", 00:10:53.543 "strip_size_kb": 64, 00:10:53.543 "state": "configuring", 00:10:53.543 "raid_level": "concat", 00:10:53.543 "superblock": true, 00:10:53.543 "num_base_bdevs": 4, 00:10:53.543 "num_base_bdevs_discovered": 2, 00:10:53.543 "num_base_bdevs_operational": 4, 00:10:53.543 "base_bdevs_list": [ 00:10:53.543 { 00:10:53.543 "name": null, 00:10:53.543 
"uuid": "a6b227ab-ea0f-4e0c-8816-e960f20a2650", 00:10:53.543 "is_configured": false, 00:10:53.543 "data_offset": 0, 00:10:53.543 "data_size": 63488 00:10:53.543 }, 00:10:53.543 { 00:10:53.543 "name": null, 00:10:53.543 "uuid": "8cf1e682-636a-4115-a948-75b29ec123d7", 00:10:53.543 "is_configured": false, 00:10:53.543 "data_offset": 0, 00:10:53.543 "data_size": 63488 00:10:53.543 }, 00:10:53.543 { 00:10:53.543 "name": "BaseBdev3", 00:10:53.543 "uuid": "96f30223-3774-4fe7-99ae-fc20c8fcf63c", 00:10:53.543 "is_configured": true, 00:10:53.543 "data_offset": 2048, 00:10:53.543 "data_size": 63488 00:10:53.543 }, 00:10:53.543 { 00:10:53.543 "name": "BaseBdev4", 00:10:53.543 "uuid": "db4375d5-7146-44a0-9d64-afa7916996b9", 00:10:53.543 "is_configured": true, 00:10:53.543 "data_offset": 2048, 00:10:53.543 "data_size": 63488 00:10:53.543 } 00:10:53.543 ] 00:10:53.543 }' 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.543 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:53.802 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.802 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.802 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.061 [2024-11-26 21:18:11.976983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.061 21:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.061 21:18:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.061 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.061 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.061 "name": "Existed_Raid", 00:10:54.061 "uuid": "8a44d37a-3156-4d91-b2ce-254403fbdd0b", 00:10:54.061 "strip_size_kb": 64, 00:10:54.061 "state": "configuring", 00:10:54.061 "raid_level": "concat", 00:10:54.061 "superblock": true, 00:10:54.061 "num_base_bdevs": 4, 00:10:54.061 "num_base_bdevs_discovered": 3, 00:10:54.061 "num_base_bdevs_operational": 4, 00:10:54.061 "base_bdevs_list": [ 00:10:54.061 { 00:10:54.061 "name": null, 00:10:54.061 "uuid": "a6b227ab-ea0f-4e0c-8816-e960f20a2650", 00:10:54.061 "is_configured": false, 00:10:54.061 "data_offset": 0, 00:10:54.061 "data_size": 63488 00:10:54.061 }, 00:10:54.061 { 00:10:54.061 "name": "BaseBdev2", 00:10:54.061 "uuid": "8cf1e682-636a-4115-a948-75b29ec123d7", 00:10:54.061 "is_configured": true, 00:10:54.061 "data_offset": 2048, 00:10:54.061 "data_size": 63488 00:10:54.061 }, 00:10:54.061 { 00:10:54.061 "name": "BaseBdev3", 00:10:54.061 "uuid": "96f30223-3774-4fe7-99ae-fc20c8fcf63c", 00:10:54.061 "is_configured": true, 00:10:54.061 "data_offset": 2048, 00:10:54.061 "data_size": 63488 00:10:54.061 }, 00:10:54.061 { 00:10:54.061 "name": "BaseBdev4", 00:10:54.061 "uuid": "db4375d5-7146-44a0-9d64-afa7916996b9", 00:10:54.061 "is_configured": true, 00:10:54.061 "data_offset": 2048, 00:10:54.061 "data_size": 63488 00:10:54.061 } 00:10:54.061 ] 00:10:54.061 }' 00:10:54.061 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.061 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.321 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.321 21:18:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.321 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.321 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.321 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.321 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:54.321 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.321 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.321 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.321 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:54.321 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a6b227ab-ea0f-4e0c-8816-e960f20a2650 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.582 [2024-11-26 21:18:12.524938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:54.582 [2024-11-26 21:18:12.525262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:54.582 [2024-11-26 21:18:12.525280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:54.582 [2024-11-26 21:18:12.525541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:54.582 NewBaseBdev 00:10:54.582 [2024-11-26 21:18:12.525685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:54.582 [2024-11-26 21:18:12.525696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:54.582 [2024-11-26 21:18:12.525811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.582 21:18:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.582 [ 00:10:54.582 { 00:10:54.582 "name": "NewBaseBdev", 00:10:54.582 "aliases": [ 00:10:54.582 "a6b227ab-ea0f-4e0c-8816-e960f20a2650" 00:10:54.582 ], 00:10:54.582 "product_name": "Malloc disk", 00:10:54.582 "block_size": 512, 00:10:54.582 "num_blocks": 65536, 00:10:54.582 "uuid": "a6b227ab-ea0f-4e0c-8816-e960f20a2650", 00:10:54.582 "assigned_rate_limits": { 00:10:54.582 "rw_ios_per_sec": 0, 00:10:54.582 "rw_mbytes_per_sec": 0, 00:10:54.582 "r_mbytes_per_sec": 0, 00:10:54.582 "w_mbytes_per_sec": 0 00:10:54.582 }, 00:10:54.582 "claimed": true, 00:10:54.582 "claim_type": "exclusive_write", 00:10:54.582 "zoned": false, 00:10:54.582 "supported_io_types": { 00:10:54.582 "read": true, 00:10:54.582 "write": true, 00:10:54.582 "unmap": true, 00:10:54.582 "flush": true, 00:10:54.582 "reset": true, 00:10:54.582 "nvme_admin": false, 00:10:54.582 "nvme_io": false, 00:10:54.582 "nvme_io_md": false, 00:10:54.582 "write_zeroes": true, 00:10:54.582 "zcopy": true, 00:10:54.582 "get_zone_info": false, 00:10:54.582 "zone_management": false, 00:10:54.582 "zone_append": false, 00:10:54.582 "compare": false, 00:10:54.582 "compare_and_write": false, 00:10:54.582 "abort": true, 00:10:54.582 "seek_hole": false, 00:10:54.582 "seek_data": false, 00:10:54.582 "copy": true, 00:10:54.582 "nvme_iov_md": false 00:10:54.582 }, 00:10:54.582 "memory_domains": [ 00:10:54.582 { 00:10:54.582 "dma_device_id": "system", 00:10:54.582 "dma_device_type": 1 00:10:54.582 }, 00:10:54.582 { 00:10:54.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.582 "dma_device_type": 2 00:10:54.582 } 00:10:54.582 ], 00:10:54.582 "driver_specific": {} 00:10:54.582 } 00:10:54.582 ] 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.582 21:18:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.582 "name": "Existed_Raid", 00:10:54.582 "uuid": "8a44d37a-3156-4d91-b2ce-254403fbdd0b", 00:10:54.582 "strip_size_kb": 64, 00:10:54.582 
"state": "online", 00:10:54.582 "raid_level": "concat", 00:10:54.582 "superblock": true, 00:10:54.582 "num_base_bdevs": 4, 00:10:54.582 "num_base_bdevs_discovered": 4, 00:10:54.582 "num_base_bdevs_operational": 4, 00:10:54.582 "base_bdevs_list": [ 00:10:54.582 { 00:10:54.582 "name": "NewBaseBdev", 00:10:54.582 "uuid": "a6b227ab-ea0f-4e0c-8816-e960f20a2650", 00:10:54.582 "is_configured": true, 00:10:54.582 "data_offset": 2048, 00:10:54.582 "data_size": 63488 00:10:54.582 }, 00:10:54.582 { 00:10:54.582 "name": "BaseBdev2", 00:10:54.582 "uuid": "8cf1e682-636a-4115-a948-75b29ec123d7", 00:10:54.582 "is_configured": true, 00:10:54.582 "data_offset": 2048, 00:10:54.582 "data_size": 63488 00:10:54.582 }, 00:10:54.582 { 00:10:54.582 "name": "BaseBdev3", 00:10:54.582 "uuid": "96f30223-3774-4fe7-99ae-fc20c8fcf63c", 00:10:54.582 "is_configured": true, 00:10:54.582 "data_offset": 2048, 00:10:54.582 "data_size": 63488 00:10:54.582 }, 00:10:54.582 { 00:10:54.582 "name": "BaseBdev4", 00:10:54.582 "uuid": "db4375d5-7146-44a0-9d64-afa7916996b9", 00:10:54.582 "is_configured": true, 00:10:54.582 "data_offset": 2048, 00:10:54.582 "data_size": 63488 00:10:54.582 } 00:10:54.582 ] 00:10:54.582 }' 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.582 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.842 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:54.842 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:54.842 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.843 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:54.843 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.843 
21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.843 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:54.843 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.843 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.843 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.843 [2024-11-26 21:18:12.924660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.843 21:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.843 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.843 "name": "Existed_Raid", 00:10:54.843 "aliases": [ 00:10:54.843 "8a44d37a-3156-4d91-b2ce-254403fbdd0b" 00:10:54.843 ], 00:10:54.843 "product_name": "Raid Volume", 00:10:54.843 "block_size": 512, 00:10:54.843 "num_blocks": 253952, 00:10:54.843 "uuid": "8a44d37a-3156-4d91-b2ce-254403fbdd0b", 00:10:54.843 "assigned_rate_limits": { 00:10:54.843 "rw_ios_per_sec": 0, 00:10:54.843 "rw_mbytes_per_sec": 0, 00:10:54.843 "r_mbytes_per_sec": 0, 00:10:54.843 "w_mbytes_per_sec": 0 00:10:54.843 }, 00:10:54.843 "claimed": false, 00:10:54.843 "zoned": false, 00:10:54.843 "supported_io_types": { 00:10:54.843 "read": true, 00:10:54.843 "write": true, 00:10:54.843 "unmap": true, 00:10:54.843 "flush": true, 00:10:54.843 "reset": true, 00:10:54.843 "nvme_admin": false, 00:10:54.843 "nvme_io": false, 00:10:54.843 "nvme_io_md": false, 00:10:54.843 "write_zeroes": true, 00:10:54.843 "zcopy": false, 00:10:54.843 "get_zone_info": false, 00:10:54.843 "zone_management": false, 00:10:54.843 "zone_append": false, 00:10:54.843 "compare": false, 00:10:54.843 "compare_and_write": false, 00:10:54.843 "abort": 
false, 00:10:54.843 "seek_hole": false, 00:10:54.843 "seek_data": false, 00:10:54.843 "copy": false, 00:10:54.843 "nvme_iov_md": false 00:10:54.843 }, 00:10:54.843 "memory_domains": [ 00:10:54.843 { 00:10:54.843 "dma_device_id": "system", 00:10:54.843 "dma_device_type": 1 00:10:54.843 }, 00:10:54.843 { 00:10:54.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.843 "dma_device_type": 2 00:10:54.843 }, 00:10:54.843 { 00:10:54.843 "dma_device_id": "system", 00:10:54.843 "dma_device_type": 1 00:10:54.843 }, 00:10:54.843 { 00:10:54.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.843 "dma_device_type": 2 00:10:54.843 }, 00:10:54.843 { 00:10:54.843 "dma_device_id": "system", 00:10:54.843 "dma_device_type": 1 00:10:54.843 }, 00:10:54.843 { 00:10:54.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.843 "dma_device_type": 2 00:10:54.843 }, 00:10:54.843 { 00:10:54.843 "dma_device_id": "system", 00:10:54.843 "dma_device_type": 1 00:10:54.843 }, 00:10:54.843 { 00:10:54.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.843 "dma_device_type": 2 00:10:54.843 } 00:10:54.843 ], 00:10:54.843 "driver_specific": { 00:10:54.843 "raid": { 00:10:54.843 "uuid": "8a44d37a-3156-4d91-b2ce-254403fbdd0b", 00:10:54.843 "strip_size_kb": 64, 00:10:54.843 "state": "online", 00:10:54.843 "raid_level": "concat", 00:10:54.843 "superblock": true, 00:10:54.843 "num_base_bdevs": 4, 00:10:54.843 "num_base_bdevs_discovered": 4, 00:10:54.843 "num_base_bdevs_operational": 4, 00:10:54.843 "base_bdevs_list": [ 00:10:54.843 { 00:10:54.843 "name": "NewBaseBdev", 00:10:54.843 "uuid": "a6b227ab-ea0f-4e0c-8816-e960f20a2650", 00:10:54.843 "is_configured": true, 00:10:54.843 "data_offset": 2048, 00:10:54.843 "data_size": 63488 00:10:54.843 }, 00:10:54.843 { 00:10:54.843 "name": "BaseBdev2", 00:10:54.843 "uuid": "8cf1e682-636a-4115-a948-75b29ec123d7", 00:10:54.843 "is_configured": true, 00:10:54.843 "data_offset": 2048, 00:10:54.843 "data_size": 63488 00:10:54.843 }, 00:10:54.843 { 00:10:54.843 
"name": "BaseBdev3", 00:10:54.843 "uuid": "96f30223-3774-4fe7-99ae-fc20c8fcf63c", 00:10:54.843 "is_configured": true, 00:10:54.843 "data_offset": 2048, 00:10:54.843 "data_size": 63488 00:10:54.843 }, 00:10:54.843 { 00:10:54.843 "name": "BaseBdev4", 00:10:54.843 "uuid": "db4375d5-7146-44a0-9d64-afa7916996b9", 00:10:54.843 "is_configured": true, 00:10:54.843 "data_offset": 2048, 00:10:54.843 "data_size": 63488 00:10:54.843 } 00:10:54.843 ] 00:10:54.843 } 00:10:54.843 } 00:10:54.843 }' 00:10:54.843 21:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:55.103 BaseBdev2 00:10:55.103 BaseBdev3 00:10:55.103 BaseBdev4' 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.103 21:18:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.103 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.409 [2024-11-26 21:18:13.275725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.409 [2024-11-26 21:18:13.275756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.409 [2024-11-26 21:18:13.275837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.409 [2024-11-26 21:18:13.275914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.409 [2024-11-26 21:18:13.275925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71760 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71760 ']' 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71760 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71760 00:10:55.409 killing process with pid 71760 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71760' 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71760 00:10:55.409 [2024-11-26 21:18:13.323914] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.409 21:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71760 00:10:55.668 [2024-11-26 21:18:13.703776] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.046 21:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:57.046 00:10:57.046 real 0m11.349s 00:10:57.046 user 0m18.016s 00:10:57.046 sys 0m1.992s 00:10:57.046 21:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.046 21:18:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.046 ************************************ 00:10:57.046 END TEST raid_state_function_test_sb 00:10:57.046 ************************************ 00:10:57.046 21:18:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:57.046 21:18:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:57.046 21:18:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.046 21:18:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.046 ************************************ 00:10:57.046 START TEST raid_superblock_test 00:10:57.046 ************************************ 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72429 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72429 00:10:57.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72429 ']' 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.046 21:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.046 [2024-11-26 21:18:14.969406] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:57.046 [2024-11-26 21:18:14.969618] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72429 ] 00:10:57.046 [2024-11-26 21:18:15.141915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.306 [2024-11-26 21:18:15.256224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.306 [2024-11-26 21:18:15.458844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.306 [2024-11-26 21:18:15.458943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:57.876 
21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.876 malloc1 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.876 [2024-11-26 21:18:15.841856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:57.876 [2024-11-26 21:18:15.841990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.876 [2024-11-26 21:18:15.842018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:57.876 [2024-11-26 21:18:15.842028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.876 [2024-11-26 21:18:15.844086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.876 [2024-11-26 21:18:15.844122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:57.876 pt1 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.876 malloc2 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.876 [2024-11-26 21:18:15.895890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:57.876 [2024-11-26 21:18:15.896025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.876 [2024-11-26 21:18:15.896076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:57.876 [2024-11-26 21:18:15.896112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.876 [2024-11-26 21:18:15.898259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.876 [2024-11-26 21:18:15.898339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:57.876 
pt2 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.876 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.876 malloc3 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.877 [2024-11-26 21:18:15.969808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:57.877 [2024-11-26 21:18:15.969899] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.877 [2024-11-26 21:18:15.969939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:57.877 [2024-11-26 21:18:15.969980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.877 [2024-11-26 21:18:15.972075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.877 [2024-11-26 21:18:15.972155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:57.877 pt3 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.877 21:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.877 malloc4 00:10:57.877 21:18:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.877 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:57.877 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.877 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.877 [2024-11-26 21:18:16.027925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:57.877 [2024-11-26 21:18:16.027990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.877 [2024-11-26 21:18:16.028010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:57.877 [2024-11-26 21:18:16.028019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.137 [2024-11-26 21:18:16.030074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.137 [2024-11-26 21:18:16.030106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:58.137 pt4 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.137 [2024-11-26 21:18:16.039932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:58.137 [2024-11-26 
21:18:16.041691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.137 [2024-11-26 21:18:16.041820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:58.137 [2024-11-26 21:18:16.041899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:58.137 [2024-11-26 21:18:16.042084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:58.137 [2024-11-26 21:18:16.042096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.137 [2024-11-26 21:18:16.042325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:58.137 [2024-11-26 21:18:16.042487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:58.137 [2024-11-26 21:18:16.042499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:58.137 [2024-11-26 21:18:16.042630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.137 "name": "raid_bdev1", 00:10:58.137 "uuid": "eda0514c-3dfe-4986-b64f-a426f741dcee", 00:10:58.137 "strip_size_kb": 64, 00:10:58.137 "state": "online", 00:10:58.137 "raid_level": "concat", 00:10:58.137 "superblock": true, 00:10:58.137 "num_base_bdevs": 4, 00:10:58.137 "num_base_bdevs_discovered": 4, 00:10:58.137 "num_base_bdevs_operational": 4, 00:10:58.137 "base_bdevs_list": [ 00:10:58.137 { 00:10:58.137 "name": "pt1", 00:10:58.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.137 "is_configured": true, 00:10:58.137 "data_offset": 2048, 00:10:58.137 "data_size": 63488 00:10:58.137 }, 00:10:58.137 { 00:10:58.137 "name": "pt2", 00:10:58.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.137 "is_configured": true, 00:10:58.137 "data_offset": 2048, 00:10:58.137 "data_size": 63488 00:10:58.137 }, 00:10:58.137 { 00:10:58.137 "name": "pt3", 00:10:58.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.137 "is_configured": true, 00:10:58.137 "data_offset": 2048, 00:10:58.137 
"data_size": 63488 00:10:58.137 }, 00:10:58.137 { 00:10:58.137 "name": "pt4", 00:10:58.137 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.137 "is_configured": true, 00:10:58.137 "data_offset": 2048, 00:10:58.137 "data_size": 63488 00:10:58.137 } 00:10:58.137 ] 00:10:58.137 }' 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.137 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.397 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:58.397 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:58.397 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.397 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.397 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.397 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.397 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.397 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.397 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.397 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.397 [2024-11-26 21:18:16.519421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.397 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.656 "name": "raid_bdev1", 00:10:58.656 "aliases": [ 00:10:58.656 "eda0514c-3dfe-4986-b64f-a426f741dcee" 
00:10:58.656 ], 00:10:58.656 "product_name": "Raid Volume", 00:10:58.656 "block_size": 512, 00:10:58.656 "num_blocks": 253952, 00:10:58.656 "uuid": "eda0514c-3dfe-4986-b64f-a426f741dcee", 00:10:58.656 "assigned_rate_limits": { 00:10:58.656 "rw_ios_per_sec": 0, 00:10:58.656 "rw_mbytes_per_sec": 0, 00:10:58.656 "r_mbytes_per_sec": 0, 00:10:58.656 "w_mbytes_per_sec": 0 00:10:58.656 }, 00:10:58.656 "claimed": false, 00:10:58.656 "zoned": false, 00:10:58.656 "supported_io_types": { 00:10:58.656 "read": true, 00:10:58.656 "write": true, 00:10:58.656 "unmap": true, 00:10:58.656 "flush": true, 00:10:58.656 "reset": true, 00:10:58.656 "nvme_admin": false, 00:10:58.656 "nvme_io": false, 00:10:58.656 "nvme_io_md": false, 00:10:58.656 "write_zeroes": true, 00:10:58.656 "zcopy": false, 00:10:58.656 "get_zone_info": false, 00:10:58.656 "zone_management": false, 00:10:58.656 "zone_append": false, 00:10:58.656 "compare": false, 00:10:58.656 "compare_and_write": false, 00:10:58.656 "abort": false, 00:10:58.656 "seek_hole": false, 00:10:58.656 "seek_data": false, 00:10:58.656 "copy": false, 00:10:58.656 "nvme_iov_md": false 00:10:58.656 }, 00:10:58.656 "memory_domains": [ 00:10:58.656 { 00:10:58.656 "dma_device_id": "system", 00:10:58.656 "dma_device_type": 1 00:10:58.656 }, 00:10:58.656 { 00:10:58.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.656 "dma_device_type": 2 00:10:58.656 }, 00:10:58.656 { 00:10:58.656 "dma_device_id": "system", 00:10:58.656 "dma_device_type": 1 00:10:58.656 }, 00:10:58.656 { 00:10:58.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.656 "dma_device_type": 2 00:10:58.656 }, 00:10:58.656 { 00:10:58.656 "dma_device_id": "system", 00:10:58.656 "dma_device_type": 1 00:10:58.656 }, 00:10:58.656 { 00:10:58.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.656 "dma_device_type": 2 00:10:58.656 }, 00:10:58.656 { 00:10:58.656 "dma_device_id": "system", 00:10:58.656 "dma_device_type": 1 00:10:58.656 }, 00:10:58.656 { 00:10:58.656 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:58.656 "dma_device_type": 2 00:10:58.656 } 00:10:58.656 ], 00:10:58.656 "driver_specific": { 00:10:58.656 "raid": { 00:10:58.656 "uuid": "eda0514c-3dfe-4986-b64f-a426f741dcee", 00:10:58.656 "strip_size_kb": 64, 00:10:58.656 "state": "online", 00:10:58.656 "raid_level": "concat", 00:10:58.656 "superblock": true, 00:10:58.656 "num_base_bdevs": 4, 00:10:58.656 "num_base_bdevs_discovered": 4, 00:10:58.656 "num_base_bdevs_operational": 4, 00:10:58.656 "base_bdevs_list": [ 00:10:58.656 { 00:10:58.656 "name": "pt1", 00:10:58.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.656 "is_configured": true, 00:10:58.656 "data_offset": 2048, 00:10:58.656 "data_size": 63488 00:10:58.656 }, 00:10:58.656 { 00:10:58.656 "name": "pt2", 00:10:58.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.656 "is_configured": true, 00:10:58.656 "data_offset": 2048, 00:10:58.656 "data_size": 63488 00:10:58.656 }, 00:10:58.656 { 00:10:58.656 "name": "pt3", 00:10:58.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.656 "is_configured": true, 00:10:58.656 "data_offset": 2048, 00:10:58.656 "data_size": 63488 00:10:58.656 }, 00:10:58.656 { 00:10:58.656 "name": "pt4", 00:10:58.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.656 "is_configured": true, 00:10:58.656 "data_offset": 2048, 00:10:58.656 "data_size": 63488 00:10:58.656 } 00:10:58.656 ] 00:10:58.656 } 00:10:58.656 } 00:10:58.656 }' 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:58.656 pt2 00:10:58.656 pt3 00:10:58.656 pt4' 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.656 21:18:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.656 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.915 [2024-11-26 21:18:16.870728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eda0514c-3dfe-4986-b64f-a426f741dcee 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z eda0514c-3dfe-4986-b64f-a426f741dcee ']' 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.915 [2024-11-26 21:18:16.914369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.915 [2024-11-26 21:18:16.914433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.915 [2024-11-26 21:18:16.914523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.915 [2024-11-26 21:18:16.914623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.915 [2024-11-26 21:18:16.914674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:58.915 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.916 21:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:58.916 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:59.175 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:59.175 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:59.175 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.175 21:18:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:59.175 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.175 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:59.175 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.175 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.175 [2024-11-26 21:18:17.078116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:59.175 [2024-11-26 21:18:17.080007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:59.175 [2024-11-26 21:18:17.080105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:59.175 [2024-11-26 21:18:17.080160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:59.175 [2024-11-26 21:18:17.080264] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:59.175 [2024-11-26 21:18:17.080357] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:59.175 [2024-11-26 21:18:17.080444] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:59.175 [2024-11-26 21:18:17.080503] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:59.175 [2024-11-26 21:18:17.080557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.175 [2024-11-26 21:18:17.080597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:59.175 request: 00:10:59.175 { 00:10:59.175 "name": "raid_bdev1", 00:10:59.175 "raid_level": "concat", 00:10:59.175 "base_bdevs": [ 00:10:59.175 "malloc1", 00:10:59.175 "malloc2", 00:10:59.175 "malloc3", 00:10:59.175 "malloc4" 00:10:59.175 ], 00:10:59.175 "strip_size_kb": 64, 00:10:59.175 "superblock": false, 00:10:59.175 "method": "bdev_raid_create", 00:10:59.175 "req_id": 1 00:10:59.175 } 00:10:59.175 Got JSON-RPC error response 00:10:59.175 response: 00:10:59.175 { 00:10:59.175 "code": -17, 00:10:59.175 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:59.175 } 00:10:59.175 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:59.175 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.176 [2024-11-26 21:18:17.141998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:59.176 [2024-11-26 21:18:17.142093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.176 [2024-11-26 21:18:17.142131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:59.176 [2024-11-26 21:18:17.142161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.176 [2024-11-26 21:18:17.144303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.176 [2024-11-26 21:18:17.144381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:59.176 [2024-11-26 21:18:17.144501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:59.176 [2024-11-26 21:18:17.144592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:59.176 pt1 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.176 "name": "raid_bdev1", 00:10:59.176 "uuid": "eda0514c-3dfe-4986-b64f-a426f741dcee", 00:10:59.176 "strip_size_kb": 64, 00:10:59.176 "state": "configuring", 00:10:59.176 "raid_level": "concat", 00:10:59.176 "superblock": true, 00:10:59.176 "num_base_bdevs": 4, 00:10:59.176 "num_base_bdevs_discovered": 1, 00:10:59.176 "num_base_bdevs_operational": 4, 00:10:59.176 "base_bdevs_list": [ 00:10:59.176 { 00:10:59.176 "name": "pt1", 00:10:59.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.176 "is_configured": true, 00:10:59.176 "data_offset": 2048, 00:10:59.176 "data_size": 63488 00:10:59.176 }, 00:10:59.176 { 00:10:59.176 "name": null, 00:10:59.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.176 "is_configured": false, 00:10:59.176 "data_offset": 2048, 00:10:59.176 "data_size": 63488 00:10:59.176 }, 00:10:59.176 { 00:10:59.176 "name": null, 00:10:59.176 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.176 "is_configured": false, 00:10:59.176 "data_offset": 2048, 00:10:59.176 "data_size": 63488 00:10:59.176 }, 00:10:59.176 { 00:10:59.176 "name": null, 00:10:59.176 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:59.176 "is_configured": false, 00:10:59.176 "data_offset": 2048, 00:10:59.176 "data_size": 63488 00:10:59.176 } 00:10:59.176 ] 00:10:59.176 }' 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.176 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.746 [2024-11-26 21:18:17.613218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.746 [2024-11-26 21:18:17.613307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.746 [2024-11-26 21:18:17.613326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:59.746 [2024-11-26 21:18:17.613337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.746 [2024-11-26 21:18:17.613778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.746 [2024-11-26 21:18:17.613798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.746 [2024-11-26 21:18:17.613882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:59.746 [2024-11-26 21:18:17.613906] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:59.746 pt2 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.746 [2024-11-26 21:18:17.621217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.746 21:18:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.746 "name": "raid_bdev1", 00:10:59.746 "uuid": "eda0514c-3dfe-4986-b64f-a426f741dcee", 00:10:59.746 "strip_size_kb": 64, 00:10:59.746 "state": "configuring", 00:10:59.746 "raid_level": "concat", 00:10:59.746 "superblock": true, 00:10:59.746 "num_base_bdevs": 4, 00:10:59.746 "num_base_bdevs_discovered": 1, 00:10:59.746 "num_base_bdevs_operational": 4, 00:10:59.746 "base_bdevs_list": [ 00:10:59.746 { 00:10:59.746 "name": "pt1", 00:10:59.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.746 "is_configured": true, 00:10:59.746 "data_offset": 2048, 00:10:59.746 "data_size": 63488 00:10:59.746 }, 00:10:59.746 { 00:10:59.746 "name": null, 00:10:59.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.746 "is_configured": false, 00:10:59.746 "data_offset": 0, 00:10:59.746 "data_size": 63488 00:10:59.746 }, 00:10:59.746 { 00:10:59.746 "name": null, 00:10:59.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.746 "is_configured": false, 00:10:59.746 "data_offset": 2048, 00:10:59.746 "data_size": 63488 00:10:59.746 }, 00:10:59.746 { 00:10:59.746 "name": null, 00:10:59.746 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:59.746 "is_configured": false, 00:10:59.746 "data_offset": 2048, 00:10:59.746 "data_size": 63488 00:10:59.746 } 00:10:59.746 ] 00:10:59.746 }' 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.746 21:18:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.006 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:00.006 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.006 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.006 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.006 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.006 [2024-11-26 21:18:18.068441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.006 [2024-11-26 21:18:18.068580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.006 [2024-11-26 21:18:18.068621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:00.006 [2024-11-26 21:18:18.068653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.006 [2024-11-26 21:18:18.069119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.006 [2024-11-26 21:18:18.069176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.006 [2024-11-26 21:18:18.069288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:00.006 [2024-11-26 21:18:18.069338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.006 pt2 00:11:00.006 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.006 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.006 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.006 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:00.006 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.006 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.006 [2024-11-26 21:18:18.080399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:00.006 [2024-11-26 21:18:18.080486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.006 [2024-11-26 21:18:18.080520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:00.007 [2024-11-26 21:18:18.080548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.007 [2024-11-26 21:18:18.080939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.007 [2024-11-26 21:18:18.081013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:00.007 [2024-11-26 21:18:18.081111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:00.007 [2024-11-26 21:18:18.081176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:00.007 pt3 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.007 [2024-11-26 21:18:18.092353] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:00.007 [2024-11-26 21:18:18.092394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.007 [2024-11-26 21:18:18.092409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:00.007 [2024-11-26 21:18:18.092417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.007 [2024-11-26 21:18:18.092762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.007 [2024-11-26 21:18:18.092778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:00.007 [2024-11-26 21:18:18.092834] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:00.007 [2024-11-26 21:18:18.092854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:00.007 [2024-11-26 21:18:18.092999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:00.007 [2024-11-26 21:18:18.093008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:00.007 [2024-11-26 21:18:18.093263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:00.007 [2024-11-26 21:18:18.093412] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:00.007 [2024-11-26 21:18:18.093430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:00.007 [2024-11-26 21:18:18.093560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.007 pt4 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.007 "name": "raid_bdev1", 00:11:00.007 "uuid": "eda0514c-3dfe-4986-b64f-a426f741dcee", 00:11:00.007 "strip_size_kb": 64, 00:11:00.007 "state": "online", 00:11:00.007 "raid_level": "concat", 00:11:00.007 
"superblock": true, 00:11:00.007 "num_base_bdevs": 4, 00:11:00.007 "num_base_bdevs_discovered": 4, 00:11:00.007 "num_base_bdevs_operational": 4, 00:11:00.007 "base_bdevs_list": [ 00:11:00.007 { 00:11:00.007 "name": "pt1", 00:11:00.007 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.007 "is_configured": true, 00:11:00.007 "data_offset": 2048, 00:11:00.007 "data_size": 63488 00:11:00.007 }, 00:11:00.007 { 00:11:00.007 "name": "pt2", 00:11:00.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.007 "is_configured": true, 00:11:00.007 "data_offset": 2048, 00:11:00.007 "data_size": 63488 00:11:00.007 }, 00:11:00.007 { 00:11:00.007 "name": "pt3", 00:11:00.007 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.007 "is_configured": true, 00:11:00.007 "data_offset": 2048, 00:11:00.007 "data_size": 63488 00:11:00.007 }, 00:11:00.007 { 00:11:00.007 "name": "pt4", 00:11:00.007 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.007 "is_configured": true, 00:11:00.007 "data_offset": 2048, 00:11:00.007 "data_size": 63488 00:11:00.007 } 00:11:00.007 ] 00:11:00.007 }' 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.007 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.577 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:00.577 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:00.577 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.577 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.577 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.577 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.577 21:18:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.577 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.577 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.577 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.577 [2024-11-26 21:18:18.595908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.577 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.577 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.577 "name": "raid_bdev1", 00:11:00.577 "aliases": [ 00:11:00.577 "eda0514c-3dfe-4986-b64f-a426f741dcee" 00:11:00.577 ], 00:11:00.577 "product_name": "Raid Volume", 00:11:00.577 "block_size": 512, 00:11:00.577 "num_blocks": 253952, 00:11:00.577 "uuid": "eda0514c-3dfe-4986-b64f-a426f741dcee", 00:11:00.577 "assigned_rate_limits": { 00:11:00.577 "rw_ios_per_sec": 0, 00:11:00.577 "rw_mbytes_per_sec": 0, 00:11:00.577 "r_mbytes_per_sec": 0, 00:11:00.577 "w_mbytes_per_sec": 0 00:11:00.577 }, 00:11:00.577 "claimed": false, 00:11:00.577 "zoned": false, 00:11:00.577 "supported_io_types": { 00:11:00.577 "read": true, 00:11:00.577 "write": true, 00:11:00.577 "unmap": true, 00:11:00.577 "flush": true, 00:11:00.577 "reset": true, 00:11:00.577 "nvme_admin": false, 00:11:00.577 "nvme_io": false, 00:11:00.577 "nvme_io_md": false, 00:11:00.577 "write_zeroes": true, 00:11:00.577 "zcopy": false, 00:11:00.577 "get_zone_info": false, 00:11:00.577 "zone_management": false, 00:11:00.577 "zone_append": false, 00:11:00.577 "compare": false, 00:11:00.577 "compare_and_write": false, 00:11:00.577 "abort": false, 00:11:00.577 "seek_hole": false, 00:11:00.577 "seek_data": false, 00:11:00.577 "copy": false, 00:11:00.578 "nvme_iov_md": false 00:11:00.578 }, 00:11:00.578 
"memory_domains": [ 00:11:00.578 { 00:11:00.578 "dma_device_id": "system", 00:11:00.578 "dma_device_type": 1 00:11:00.578 }, 00:11:00.578 { 00:11:00.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.578 "dma_device_type": 2 00:11:00.578 }, 00:11:00.578 { 00:11:00.578 "dma_device_id": "system", 00:11:00.578 "dma_device_type": 1 00:11:00.578 }, 00:11:00.578 { 00:11:00.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.578 "dma_device_type": 2 00:11:00.578 }, 00:11:00.578 { 00:11:00.578 "dma_device_id": "system", 00:11:00.578 "dma_device_type": 1 00:11:00.578 }, 00:11:00.578 { 00:11:00.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.578 "dma_device_type": 2 00:11:00.578 }, 00:11:00.578 { 00:11:00.578 "dma_device_id": "system", 00:11:00.578 "dma_device_type": 1 00:11:00.578 }, 00:11:00.578 { 00:11:00.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.578 "dma_device_type": 2 00:11:00.578 } 00:11:00.578 ], 00:11:00.578 "driver_specific": { 00:11:00.578 "raid": { 00:11:00.578 "uuid": "eda0514c-3dfe-4986-b64f-a426f741dcee", 00:11:00.578 "strip_size_kb": 64, 00:11:00.578 "state": "online", 00:11:00.578 "raid_level": "concat", 00:11:00.578 "superblock": true, 00:11:00.578 "num_base_bdevs": 4, 00:11:00.578 "num_base_bdevs_discovered": 4, 00:11:00.578 "num_base_bdevs_operational": 4, 00:11:00.578 "base_bdevs_list": [ 00:11:00.578 { 00:11:00.578 "name": "pt1", 00:11:00.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.578 "is_configured": true, 00:11:00.578 "data_offset": 2048, 00:11:00.578 "data_size": 63488 00:11:00.578 }, 00:11:00.578 { 00:11:00.578 "name": "pt2", 00:11:00.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.578 "is_configured": true, 00:11:00.578 "data_offset": 2048, 00:11:00.578 "data_size": 63488 00:11:00.578 }, 00:11:00.578 { 00:11:00.578 "name": "pt3", 00:11:00.578 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.578 "is_configured": true, 00:11:00.578 "data_offset": 2048, 00:11:00.578 "data_size": 63488 
00:11:00.578 }, 00:11:00.578 { 00:11:00.578 "name": "pt4", 00:11:00.578 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.578 "is_configured": true, 00:11:00.578 "data_offset": 2048, 00:11:00.578 "data_size": 63488 00:11:00.578 } 00:11:00.578 ] 00:11:00.578 } 00:11:00.578 } 00:11:00.578 }' 00:11:00.578 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.578 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:00.578 pt2 00:11:00.578 pt3 00:11:00.578 pt4' 00:11:00.578 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.578 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.578 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.578 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.578 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:00.578 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.578 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:00.837 
21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.837 [2024-11-26 21:18:18.915331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.837 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eda0514c-3dfe-4986-b64f-a426f741dcee '!=' eda0514c-3dfe-4986-b64f-a426f741dcee ']' 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72429 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72429 ']' 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72429 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72429 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72429' 00:11:00.838 killing process with pid 72429 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72429 00:11:00.838 [2024-11-26 21:18:18.987312] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.838 [2024-11-26 21:18:18.987445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.838 21:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72429 00:11:00.838 [2024-11-26 21:18:18.987547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.838 [2024-11-26 21:18:18.987559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:01.404 [2024-11-26 21:18:19.375946] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.342 21:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:02.342 00:11:02.342 real 0m5.604s 00:11:02.342 user 0m8.070s 00:11:02.342 sys 0m0.954s 00:11:02.342 21:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.342 21:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.342 ************************************ 00:11:02.342 END TEST raid_superblock_test 
00:11:02.342 ************************************ 00:11:02.602 21:18:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:02.602 21:18:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:02.602 21:18:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.602 21:18:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.602 ************************************ 00:11:02.602 START TEST raid_read_error_test 00:11:02.602 ************************************ 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.B5MPMyhZr6 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72689 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72689 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72689 ']' 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.602 21:18:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.602 [2024-11-26 21:18:20.664627] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:02.602 [2024-11-26 21:18:20.664818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72689 ] 00:11:02.862 [2024-11-26 21:18:20.838453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.862 [2024-11-26 21:18:20.948075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.124 [2024-11-26 21:18:21.145454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.124 [2024-11-26 21:18:21.145590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.393 BaseBdev1_malloc 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.393 true 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.393 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.393 [2024-11-26 21:18:21.536698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:03.393 [2024-11-26 21:18:21.536754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.393 [2024-11-26 21:18:21.536772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:03.393 [2024-11-26 21:18:21.536782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.393 [2024-11-26 21:18:21.538803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.393 [2024-11-26 21:18:21.538844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:03.665 BaseBdev1 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.665 BaseBdev2_malloc 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.665 true 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.665 [2024-11-26 21:18:21.603314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:03.665 [2024-11-26 21:18:21.603375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.665 [2024-11-26 21:18:21.603392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:03.665 [2024-11-26 21:18:21.603402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.665 [2024-11-26 21:18:21.605497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.665 [2024-11-26 21:18:21.605625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:03.665 BaseBdev2 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.665 BaseBdev3_malloc 00:11:03.665 21:18:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.665 true 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.665 [2024-11-26 21:18:21.684250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:03.665 [2024-11-26 21:18:21.684302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.665 [2024-11-26 21:18:21.684319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:03.665 [2024-11-26 21:18:21.684330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.665 [2024-11-26 21:18:21.686418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.665 [2024-11-26 21:18:21.686501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:03.665 BaseBdev3 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.665 BaseBdev4_malloc 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:03.665 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.666 true 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.666 [2024-11-26 21:18:21.750979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:03.666 [2024-11-26 21:18:21.751038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.666 [2024-11-26 21:18:21.751055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:03.666 [2024-11-26 21:18:21.751065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.666 [2024-11-26 21:18:21.753131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.666 [2024-11-26 21:18:21.753230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:03.666 BaseBdev4 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.666 [2024-11-26 21:18:21.763031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.666 [2024-11-26 21:18:21.764896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.666 [2024-11-26 21:18:21.765022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.666 [2024-11-26 21:18:21.765107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:03.666 [2024-11-26 21:18:21.765327] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:03.666 [2024-11-26 21:18:21.765345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:03.666 [2024-11-26 21:18:21.765590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:03.666 [2024-11-26 21:18:21.765749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:03.666 [2024-11-26 21:18:21.765760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:03.666 [2024-11-26 21:18:21.765914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:03.666 21:18:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.666 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.932 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.932 "name": "raid_bdev1", 00:11:03.932 "uuid": "7fc18072-a300-41e0-a576-77c903db5738", 00:11:03.932 "strip_size_kb": 64, 00:11:03.932 "state": "online", 00:11:03.932 "raid_level": "concat", 00:11:03.932 "superblock": true, 00:11:03.932 "num_base_bdevs": 4, 00:11:03.932 "num_base_bdevs_discovered": 4, 00:11:03.932 "num_base_bdevs_operational": 4, 00:11:03.932 "base_bdevs_list": [ 
00:11:03.932 { 00:11:03.932 "name": "BaseBdev1", 00:11:03.932 "uuid": "2055691a-238a-519e-a2cc-5a17d0b79e0b", 00:11:03.932 "is_configured": true, 00:11:03.932 "data_offset": 2048, 00:11:03.932 "data_size": 63488 00:11:03.932 }, 00:11:03.932 { 00:11:03.932 "name": "BaseBdev2", 00:11:03.932 "uuid": "4b6d4b27-edaf-540d-a33e-0340aecdbfb5", 00:11:03.932 "is_configured": true, 00:11:03.932 "data_offset": 2048, 00:11:03.932 "data_size": 63488 00:11:03.932 }, 00:11:03.932 { 00:11:03.932 "name": "BaseBdev3", 00:11:03.932 "uuid": "8d8570ca-fa9a-5e12-96f4-ddd402880b1d", 00:11:03.932 "is_configured": true, 00:11:03.932 "data_offset": 2048, 00:11:03.932 "data_size": 63488 00:11:03.932 }, 00:11:03.932 { 00:11:03.932 "name": "BaseBdev4", 00:11:03.932 "uuid": "f056dfbb-541a-5052-a7e0-439cea56e087", 00:11:03.932 "is_configured": true, 00:11:03.932 "data_offset": 2048, 00:11:03.932 "data_size": 63488 00:11:03.932 } 00:11:03.932 ] 00:11:03.932 }' 00:11:03.932 21:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.932 21:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.192 21:18:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:04.192 21:18:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:04.192 [2024-11-26 21:18:22.279499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:05.130 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:05.130 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.130 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.130 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.130 21:18:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:05.130 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:05.130 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:05.130 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:05.130 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.131 21:18:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.131 "name": "raid_bdev1", 00:11:05.131 "uuid": "7fc18072-a300-41e0-a576-77c903db5738", 00:11:05.131 "strip_size_kb": 64, 00:11:05.131 "state": "online", 00:11:05.131 "raid_level": "concat", 00:11:05.131 "superblock": true, 00:11:05.131 "num_base_bdevs": 4, 00:11:05.131 "num_base_bdevs_discovered": 4, 00:11:05.131 "num_base_bdevs_operational": 4, 00:11:05.131 "base_bdevs_list": [ 00:11:05.131 { 00:11:05.131 "name": "BaseBdev1", 00:11:05.131 "uuid": "2055691a-238a-519e-a2cc-5a17d0b79e0b", 00:11:05.131 "is_configured": true, 00:11:05.131 "data_offset": 2048, 00:11:05.131 "data_size": 63488 00:11:05.131 }, 00:11:05.131 { 00:11:05.131 "name": "BaseBdev2", 00:11:05.131 "uuid": "4b6d4b27-edaf-540d-a33e-0340aecdbfb5", 00:11:05.131 "is_configured": true, 00:11:05.131 "data_offset": 2048, 00:11:05.131 "data_size": 63488 00:11:05.131 }, 00:11:05.131 { 00:11:05.131 "name": "BaseBdev3", 00:11:05.131 "uuid": "8d8570ca-fa9a-5e12-96f4-ddd402880b1d", 00:11:05.131 "is_configured": true, 00:11:05.131 "data_offset": 2048, 00:11:05.131 "data_size": 63488 00:11:05.131 }, 00:11:05.131 { 00:11:05.131 "name": "BaseBdev4", 00:11:05.131 "uuid": "f056dfbb-541a-5052-a7e0-439cea56e087", 00:11:05.131 "is_configured": true, 00:11:05.131 "data_offset": 2048, 00:11:05.131 "data_size": 63488 00:11:05.131 } 00:11:05.131 ] 00:11:05.131 }' 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.131 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.699 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.699 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.699 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.699 [2024-11-26 21:18:23.643452] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.699 [2024-11-26 21:18:23.643488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.699 [2024-11-26 21:18:23.646218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.699 [2024-11-26 21:18:23.646281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.699 [2024-11-26 21:18:23.646323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.699 [2024-11-26 21:18:23.646338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:05.699 { 00:11:05.699 "results": [ 00:11:05.699 { 00:11:05.699 "job": "raid_bdev1", 00:11:05.699 "core_mask": "0x1", 00:11:05.699 "workload": "randrw", 00:11:05.700 "percentage": 50, 00:11:05.700 "status": "finished", 00:11:05.700 "queue_depth": 1, 00:11:05.700 "io_size": 131072, 00:11:05.700 "runtime": 1.364822, 00:11:05.700 "iops": 15810.120294074979, 00:11:05.700 "mibps": 1976.2650367593724, 00:11:05.700 "io_failed": 1, 00:11:05.700 "io_timeout": 0, 00:11:05.700 "avg_latency_us": 87.56861019052367, 00:11:05.700 "min_latency_us": 25.9353711790393, 00:11:05.700 "max_latency_us": 1359.3711790393013 00:11:05.700 } 00:11:05.700 ], 00:11:05.700 "core_count": 1 00:11:05.700 } 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72689 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72689 ']' 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72689 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72689 00:11:05.700 killing process with pid 72689 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72689' 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72689 00:11:05.700 [2024-11-26 21:18:23.675354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.700 21:18:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72689 00:11:05.959 [2024-11-26 21:18:23.985636] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.340 21:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:07.340 21:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.B5MPMyhZr6 00:11:07.340 21:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:07.340 21:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:07.340 21:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:07.340 21:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.340 21:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.340 21:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:07.340 00:11:07.340 real 0m4.565s 00:11:07.340 user 0m5.362s 00:11:07.340 sys 0m0.561s 00:11:07.340 21:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:07.340 21:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.340 ************************************ 00:11:07.340 END TEST raid_read_error_test 00:11:07.340 ************************************ 00:11:07.340 21:18:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:07.340 21:18:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:07.340 21:18:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.340 21:18:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.340 ************************************ 00:11:07.340 START TEST raid_write_error_test 00:11:07.340 ************************************ 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.M2XPdRBNzn 00:11:07.340 21:18:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72837 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72837 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72837 ']' 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.340 21:18:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.340 [2024-11-26 21:18:25.306869] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:07.340 [2024-11-26 21:18:25.307093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72837 ] 00:11:07.340 [2024-11-26 21:18:25.485325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.599 [2024-11-26 21:18:25.595809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.858 [2024-11-26 21:18:25.790592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.858 [2024-11-26 21:18:25.790712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.119 BaseBdev1_malloc 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.119 true 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.119 [2024-11-26 21:18:26.192318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:08.119 [2024-11-26 21:18:26.192427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.119 [2024-11-26 21:18:26.192453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:08.119 [2024-11-26 21:18:26.192480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.119 [2024-11-26 21:18:26.194810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.119 [2024-11-26 21:18:26.194851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:08.119 BaseBdev1 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.119 BaseBdev2_malloc 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:08.119 21:18:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.119 true 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.119 [2024-11-26 21:18:26.258170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:08.119 [2024-11-26 21:18:26.258224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.119 [2024-11-26 21:18:26.258266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:08.119 [2024-11-26 21:18:26.258276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.119 [2024-11-26 21:18:26.260317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.119 [2024-11-26 21:18:26.260393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:08.119 BaseBdev2 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.119 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:08.380 BaseBdev3_malloc 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.380 true 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.380 [2024-11-26 21:18:26.337939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:08.380 [2024-11-26 21:18:26.338035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.380 [2024-11-26 21:18:26.338056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:08.380 [2024-11-26 21:18:26.338066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.380 [2024-11-26 21:18:26.340085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.380 [2024-11-26 21:18:26.340181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:08.380 BaseBdev3 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.380 BaseBdev4_malloc 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.380 true 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.380 [2024-11-26 21:18:26.403938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:08.380 [2024-11-26 21:18:26.403998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.380 [2024-11-26 21:18:26.404014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:08.380 [2024-11-26 21:18:26.404024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.380 [2024-11-26 21:18:26.406077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.380 [2024-11-26 21:18:26.406114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:08.380 BaseBdev4 
00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.380 [2024-11-26 21:18:26.416017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.380 [2024-11-26 21:18:26.417813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.380 [2024-11-26 21:18:26.417883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.380 [2024-11-26 21:18:26.417941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:08.380 [2024-11-26 21:18:26.418175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:08.380 [2024-11-26 21:18:26.418190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:08.380 [2024-11-26 21:18:26.418419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:08.380 [2024-11-26 21:18:26.418580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:08.380 [2024-11-26 21:18:26.418590] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:08.380 [2024-11-26 21:18:26.418735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.380 "name": "raid_bdev1", 00:11:08.380 "uuid": "d4204588-166b-4f8e-897c-4152f4be3596", 00:11:08.380 "strip_size_kb": 64, 00:11:08.380 "state": "online", 00:11:08.380 "raid_level": "concat", 00:11:08.380 "superblock": true, 00:11:08.380 "num_base_bdevs": 4, 00:11:08.380 "num_base_bdevs_discovered": 4, 00:11:08.380 
"num_base_bdevs_operational": 4, 00:11:08.380 "base_bdevs_list": [ 00:11:08.380 { 00:11:08.380 "name": "BaseBdev1", 00:11:08.380 "uuid": "b9350676-77e4-5716-b848-515c461bdf0a", 00:11:08.380 "is_configured": true, 00:11:08.380 "data_offset": 2048, 00:11:08.380 "data_size": 63488 00:11:08.380 }, 00:11:08.380 { 00:11:08.380 "name": "BaseBdev2", 00:11:08.380 "uuid": "6d66292b-7e50-5c6a-9996-3e1a1137f882", 00:11:08.380 "is_configured": true, 00:11:08.380 "data_offset": 2048, 00:11:08.380 "data_size": 63488 00:11:08.380 }, 00:11:08.380 { 00:11:08.380 "name": "BaseBdev3", 00:11:08.380 "uuid": "92745146-2b5f-546e-859a-3a271d543635", 00:11:08.380 "is_configured": true, 00:11:08.380 "data_offset": 2048, 00:11:08.380 "data_size": 63488 00:11:08.380 }, 00:11:08.380 { 00:11:08.380 "name": "BaseBdev4", 00:11:08.380 "uuid": "5ffecb5c-a150-5135-a342-1b4235591806", 00:11:08.380 "is_configured": true, 00:11:08.380 "data_offset": 2048, 00:11:08.380 "data_size": 63488 00:11:08.380 } 00:11:08.380 ] 00:11:08.380 }' 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.380 21:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.950 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:08.950 21:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:08.950 [2024-11-26 21:18:26.976289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.890 21:18:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.890 "name": "raid_bdev1", 00:11:09.890 "uuid": "d4204588-166b-4f8e-897c-4152f4be3596", 00:11:09.890 "strip_size_kb": 64, 00:11:09.890 "state": "online", 00:11:09.890 "raid_level": "concat", 00:11:09.890 "superblock": true, 00:11:09.890 "num_base_bdevs": 4, 00:11:09.890 "num_base_bdevs_discovered": 4, 00:11:09.890 "num_base_bdevs_operational": 4, 00:11:09.890 "base_bdevs_list": [ 00:11:09.890 { 00:11:09.890 "name": "BaseBdev1", 00:11:09.890 "uuid": "b9350676-77e4-5716-b848-515c461bdf0a", 00:11:09.890 "is_configured": true, 00:11:09.890 "data_offset": 2048, 00:11:09.890 "data_size": 63488 00:11:09.890 }, 00:11:09.890 { 00:11:09.890 "name": "BaseBdev2", 00:11:09.890 "uuid": "6d66292b-7e50-5c6a-9996-3e1a1137f882", 00:11:09.890 "is_configured": true, 00:11:09.890 "data_offset": 2048, 00:11:09.890 "data_size": 63488 00:11:09.890 }, 00:11:09.890 { 00:11:09.890 "name": "BaseBdev3", 00:11:09.890 "uuid": "92745146-2b5f-546e-859a-3a271d543635", 00:11:09.890 "is_configured": true, 00:11:09.890 "data_offset": 2048, 00:11:09.890 "data_size": 63488 00:11:09.890 }, 00:11:09.890 { 00:11:09.890 "name": "BaseBdev4", 00:11:09.890 "uuid": "5ffecb5c-a150-5135-a342-1b4235591806", 00:11:09.890 "is_configured": true, 00:11:09.890 "data_offset": 2048, 00:11:09.890 "data_size": 63488 00:11:09.890 } 00:11:09.890 ] 00:11:09.890 }' 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.890 21:18:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:10.460 [2024-11-26 21:18:28.344919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.460 [2024-11-26 21:18:28.344979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.460 [2024-11-26 21:18:28.348294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.460 [2024-11-26 21:18:28.348420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.460 [2024-11-26 21:18:28.348477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.460 [2024-11-26 21:18:28.348492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:10.460 { 00:11:10.460 "results": [ 00:11:10.460 { 00:11:10.460 "job": "raid_bdev1", 00:11:10.460 "core_mask": "0x1", 00:11:10.460 "workload": "randrw", 00:11:10.460 "percentage": 50, 00:11:10.460 "status": "finished", 00:11:10.460 "queue_depth": 1, 00:11:10.460 "io_size": 131072, 00:11:10.460 "runtime": 1.369176, 00:11:10.460 "iops": 14063.933343850607, 00:11:10.460 "mibps": 1757.991667981326, 00:11:10.460 "io_failed": 1, 00:11:10.460 "io_timeout": 0, 00:11:10.460 "avg_latency_us": 98.15114072963429, 00:11:10.460 "min_latency_us": 28.17117903930131, 00:11:10.460 "max_latency_us": 1538.235807860262 00:11:10.460 } 00:11:10.460 ], 00:11:10.460 "core_count": 1 00:11:10.460 } 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72837 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72837 ']' 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72837 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72837 00:11:10.460 killing process with pid 72837 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72837' 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72837 00:11:10.460 21:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72837 00:11:10.460 [2024-11-26 21:18:28.388513] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.720 [2024-11-26 21:18:28.758429] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.102 21:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:12.103 21:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.M2XPdRBNzn 00:11:12.103 21:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:12.103 21:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:12.103 21:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:12.103 21:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.103 21:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.103 ************************************ 00:11:12.103 END TEST raid_write_error_test 00:11:12.103 ************************************ 00:11:12.103 21:18:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:12.103 00:11:12.103 real 0m4.914s 00:11:12.103 user 0m5.763s 00:11:12.103 sys 0m0.578s 00:11:12.103 21:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.103 21:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.103 21:18:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:12.103 21:18:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:12.103 21:18:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.103 21:18:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.103 21:18:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.103 ************************************ 00:11:12.103 START TEST raid_state_function_test 00:11:12.103 ************************************ 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:12.103 21:18:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:12.103 Process raid pid: 72989 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72989 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72989' 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72989 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72989 ']' 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.103 21:18:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.363 [2024-11-26 21:18:30.267703] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:12.363 [2024-11-26 21:18:30.267946] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.363 [2024-11-26 21:18:30.429450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.623 [2024-11-26 21:18:30.560403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.885 [2024-11-26 21:18:30.782711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.885 [2024-11-26 21:18:30.782755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.143 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.143 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 [2024-11-26 21:18:31.126828] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.144 [2024-11-26 21:18:31.126884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.144 [2024-11-26 21:18:31.126895] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.144 [2024-11-26 21:18:31.126905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.144 [2024-11-26 21:18:31.126911] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:13.144 [2024-11-26 21:18:31.126921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.144 [2024-11-26 21:18:31.126932] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.144 [2024-11-26 21:18:31.126940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.144 "name": "Existed_Raid", 00:11:13.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.144 "strip_size_kb": 0, 00:11:13.144 "state": "configuring", 00:11:13.144 "raid_level": "raid1", 00:11:13.144 "superblock": false, 00:11:13.144 "num_base_bdevs": 4, 00:11:13.144 "num_base_bdevs_discovered": 0, 00:11:13.144 "num_base_bdevs_operational": 4, 00:11:13.144 "base_bdevs_list": [ 00:11:13.144 { 00:11:13.144 "name": "BaseBdev1", 00:11:13.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.144 "is_configured": false, 00:11:13.144 "data_offset": 0, 00:11:13.144 "data_size": 0 00:11:13.144 }, 00:11:13.144 { 00:11:13.144 "name": "BaseBdev2", 00:11:13.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.144 "is_configured": false, 00:11:13.144 "data_offset": 0, 00:11:13.144 "data_size": 0 00:11:13.144 }, 00:11:13.144 { 00:11:13.144 "name": "BaseBdev3", 00:11:13.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.144 "is_configured": false, 00:11:13.144 "data_offset": 0, 00:11:13.144 "data_size": 0 00:11:13.144 }, 00:11:13.144 { 00:11:13.144 "name": "BaseBdev4", 00:11:13.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.144 "is_configured": false, 00:11:13.144 "data_offset": 0, 00:11:13.144 "data_size": 0 00:11:13.144 } 00:11:13.144 ] 00:11:13.144 }' 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.144 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.714 [2024-11-26 21:18:31.582036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.714 [2024-11-26 21:18:31.582123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.714 [2024-11-26 21:18:31.590024] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.714 [2024-11-26 21:18:31.590116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.714 [2024-11-26 21:18:31.590153] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.714 [2024-11-26 21:18:31.590183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.714 [2024-11-26 21:18:31.590212] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.714 [2024-11-26 21:18:31.590239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.714 [2024-11-26 21:18:31.590285] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.714 [2024-11-26 21:18:31.590313] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.714 [2024-11-26 21:18:31.633789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.714 BaseBdev1 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.714 [ 00:11:13.714 { 00:11:13.714 "name": "BaseBdev1", 00:11:13.714 "aliases": [ 00:11:13.714 "54b75568-40da-4008-820f-a5fc88b3f2bf" 00:11:13.714 ], 00:11:13.714 "product_name": "Malloc disk", 00:11:13.714 "block_size": 512, 00:11:13.714 "num_blocks": 65536, 00:11:13.714 "uuid": "54b75568-40da-4008-820f-a5fc88b3f2bf", 00:11:13.714 "assigned_rate_limits": { 00:11:13.714 "rw_ios_per_sec": 0, 00:11:13.714 "rw_mbytes_per_sec": 0, 00:11:13.714 "r_mbytes_per_sec": 0, 00:11:13.714 "w_mbytes_per_sec": 0 00:11:13.714 }, 00:11:13.714 "claimed": true, 00:11:13.714 "claim_type": "exclusive_write", 00:11:13.714 "zoned": false, 00:11:13.714 "supported_io_types": { 00:11:13.714 "read": true, 00:11:13.714 "write": true, 00:11:13.714 "unmap": true, 00:11:13.714 "flush": true, 00:11:13.714 "reset": true, 00:11:13.714 "nvme_admin": false, 00:11:13.714 "nvme_io": false, 00:11:13.714 "nvme_io_md": false, 00:11:13.714 "write_zeroes": true, 00:11:13.714 "zcopy": true, 00:11:13.714 "get_zone_info": false, 00:11:13.714 "zone_management": false, 00:11:13.714 "zone_append": false, 00:11:13.714 "compare": false, 00:11:13.714 "compare_and_write": false, 00:11:13.714 "abort": true, 00:11:13.714 "seek_hole": false, 00:11:13.714 "seek_data": false, 00:11:13.714 "copy": true, 00:11:13.714 "nvme_iov_md": false 00:11:13.714 }, 00:11:13.714 "memory_domains": [ 00:11:13.714 { 00:11:13.714 "dma_device_id": "system", 00:11:13.714 "dma_device_type": 1 00:11:13.714 }, 00:11:13.714 { 00:11:13.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.714 "dma_device_type": 2 00:11:13.714 } 00:11:13.714 ], 00:11:13.714 "driver_specific": {} 00:11:13.714 } 00:11:13.714 ] 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.714 "name": "Existed_Raid", 
00:11:13.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.714 "strip_size_kb": 0, 00:11:13.714 "state": "configuring", 00:11:13.714 "raid_level": "raid1", 00:11:13.714 "superblock": false, 00:11:13.714 "num_base_bdevs": 4, 00:11:13.714 "num_base_bdevs_discovered": 1, 00:11:13.714 "num_base_bdevs_operational": 4, 00:11:13.714 "base_bdevs_list": [ 00:11:13.714 { 00:11:13.714 "name": "BaseBdev1", 00:11:13.714 "uuid": "54b75568-40da-4008-820f-a5fc88b3f2bf", 00:11:13.714 "is_configured": true, 00:11:13.714 "data_offset": 0, 00:11:13.714 "data_size": 65536 00:11:13.714 }, 00:11:13.714 { 00:11:13.714 "name": "BaseBdev2", 00:11:13.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.714 "is_configured": false, 00:11:13.714 "data_offset": 0, 00:11:13.714 "data_size": 0 00:11:13.714 }, 00:11:13.714 { 00:11:13.714 "name": "BaseBdev3", 00:11:13.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.714 "is_configured": false, 00:11:13.714 "data_offset": 0, 00:11:13.714 "data_size": 0 00:11:13.714 }, 00:11:13.714 { 00:11:13.714 "name": "BaseBdev4", 00:11:13.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.714 "is_configured": false, 00:11:13.714 "data_offset": 0, 00:11:13.714 "data_size": 0 00:11:13.714 } 00:11:13.714 ] 00:11:13.714 }' 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.714 21:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 [2024-11-26 21:18:32.101108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.975 [2024-11-26 21:18:32.101218] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 [2024-11-26 21:18:32.109129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.975 [2024-11-26 21:18:32.111161] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.975 [2024-11-26 21:18:32.111250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.975 [2024-11-26 21:18:32.111284] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.975 [2024-11-26 21:18:32.111314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.975 [2024-11-26 21:18:32.111336] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.975 [2024-11-26 21:18:32.111377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:13.975 
21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.975 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.236 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.236 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.236 "name": "Existed_Raid", 00:11:14.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.236 "strip_size_kb": 0, 00:11:14.236 "state": "configuring", 00:11:14.236 "raid_level": "raid1", 00:11:14.236 "superblock": false, 00:11:14.236 "num_base_bdevs": 4, 00:11:14.236 "num_base_bdevs_discovered": 1, 
00:11:14.236 "num_base_bdevs_operational": 4, 00:11:14.236 "base_bdevs_list": [ 00:11:14.236 { 00:11:14.236 "name": "BaseBdev1", 00:11:14.236 "uuid": "54b75568-40da-4008-820f-a5fc88b3f2bf", 00:11:14.236 "is_configured": true, 00:11:14.236 "data_offset": 0, 00:11:14.236 "data_size": 65536 00:11:14.236 }, 00:11:14.236 { 00:11:14.236 "name": "BaseBdev2", 00:11:14.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.236 "is_configured": false, 00:11:14.236 "data_offset": 0, 00:11:14.236 "data_size": 0 00:11:14.236 }, 00:11:14.236 { 00:11:14.236 "name": "BaseBdev3", 00:11:14.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.236 "is_configured": false, 00:11:14.236 "data_offset": 0, 00:11:14.236 "data_size": 0 00:11:14.236 }, 00:11:14.236 { 00:11:14.236 "name": "BaseBdev4", 00:11:14.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.236 "is_configured": false, 00:11:14.236 "data_offset": 0, 00:11:14.236 "data_size": 0 00:11:14.236 } 00:11:14.236 ] 00:11:14.236 }' 00:11:14.236 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.236 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.496 BaseBdev2 00:11:14.496 [2024-11-26 21:18:32.579387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.496 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.496 [ 00:11:14.496 { 00:11:14.496 "name": "BaseBdev2", 00:11:14.496 "aliases": [ 00:11:14.496 "b8db875e-1333-4538-80fa-d735ba1c4f75" 00:11:14.496 ], 00:11:14.496 "product_name": "Malloc disk", 00:11:14.496 "block_size": 512, 00:11:14.496 "num_blocks": 65536, 00:11:14.496 "uuid": "b8db875e-1333-4538-80fa-d735ba1c4f75", 00:11:14.496 "assigned_rate_limits": { 00:11:14.496 "rw_ios_per_sec": 0, 00:11:14.496 "rw_mbytes_per_sec": 0, 00:11:14.496 "r_mbytes_per_sec": 0, 00:11:14.496 "w_mbytes_per_sec": 0 00:11:14.496 }, 00:11:14.496 "claimed": true, 00:11:14.496 "claim_type": "exclusive_write", 00:11:14.496 "zoned": false, 00:11:14.496 "supported_io_types": { 00:11:14.496 "read": true, 
00:11:14.496 "write": true, 00:11:14.496 "unmap": true, 00:11:14.496 "flush": true, 00:11:14.496 "reset": true, 00:11:14.496 "nvme_admin": false, 00:11:14.496 "nvme_io": false, 00:11:14.496 "nvme_io_md": false, 00:11:14.496 "write_zeroes": true, 00:11:14.496 "zcopy": true, 00:11:14.497 "get_zone_info": false, 00:11:14.497 "zone_management": false, 00:11:14.497 "zone_append": false, 00:11:14.497 "compare": false, 00:11:14.497 "compare_and_write": false, 00:11:14.497 "abort": true, 00:11:14.497 "seek_hole": false, 00:11:14.497 "seek_data": false, 00:11:14.497 "copy": true, 00:11:14.497 "nvme_iov_md": false 00:11:14.497 }, 00:11:14.497 "memory_domains": [ 00:11:14.497 { 00:11:14.497 "dma_device_id": "system", 00:11:14.497 "dma_device_type": 1 00:11:14.497 }, 00:11:14.497 { 00:11:14.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.497 "dma_device_type": 2 00:11:14.497 } 00:11:14.497 ], 00:11:14.497 "driver_specific": {} 00:11:14.497 } 00:11:14.497 ] 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.497 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.757 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.757 "name": "Existed_Raid", 00:11:14.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.757 "strip_size_kb": 0, 00:11:14.757 "state": "configuring", 00:11:14.757 "raid_level": "raid1", 00:11:14.757 "superblock": false, 00:11:14.757 "num_base_bdevs": 4, 00:11:14.757 "num_base_bdevs_discovered": 2, 00:11:14.757 "num_base_bdevs_operational": 4, 00:11:14.757 "base_bdevs_list": [ 00:11:14.757 { 00:11:14.757 "name": "BaseBdev1", 00:11:14.757 "uuid": "54b75568-40da-4008-820f-a5fc88b3f2bf", 00:11:14.757 "is_configured": true, 00:11:14.757 "data_offset": 0, 00:11:14.757 "data_size": 65536 00:11:14.757 }, 00:11:14.757 { 00:11:14.757 "name": "BaseBdev2", 00:11:14.757 "uuid": "b8db875e-1333-4538-80fa-d735ba1c4f75", 00:11:14.757 "is_configured": true, 
00:11:14.757 "data_offset": 0, 00:11:14.757 "data_size": 65536 00:11:14.757 }, 00:11:14.757 { 00:11:14.757 "name": "BaseBdev3", 00:11:14.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.757 "is_configured": false, 00:11:14.757 "data_offset": 0, 00:11:14.757 "data_size": 0 00:11:14.757 }, 00:11:14.757 { 00:11:14.757 "name": "BaseBdev4", 00:11:14.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.757 "is_configured": false, 00:11:14.757 "data_offset": 0, 00:11:14.757 "data_size": 0 00:11:14.757 } 00:11:14.757 ] 00:11:14.757 }' 00:11:14.757 21:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.757 21:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.017 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.017 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.017 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.277 [2024-11-26 21:18:33.174865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.277 BaseBdev3 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.277 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.277 [ 00:11:15.277 { 00:11:15.277 "name": "BaseBdev3", 00:11:15.277 "aliases": [ 00:11:15.277 "1320c591-a64b-4002-aa64-37c66d3378e2" 00:11:15.277 ], 00:11:15.277 "product_name": "Malloc disk", 00:11:15.277 "block_size": 512, 00:11:15.278 "num_blocks": 65536, 00:11:15.278 "uuid": "1320c591-a64b-4002-aa64-37c66d3378e2", 00:11:15.278 "assigned_rate_limits": { 00:11:15.278 "rw_ios_per_sec": 0, 00:11:15.278 "rw_mbytes_per_sec": 0, 00:11:15.278 "r_mbytes_per_sec": 0, 00:11:15.278 "w_mbytes_per_sec": 0 00:11:15.278 }, 00:11:15.278 "claimed": true, 00:11:15.278 "claim_type": "exclusive_write", 00:11:15.278 "zoned": false, 00:11:15.278 "supported_io_types": { 00:11:15.278 "read": true, 00:11:15.278 "write": true, 00:11:15.278 "unmap": true, 00:11:15.278 "flush": true, 00:11:15.278 "reset": true, 00:11:15.278 "nvme_admin": false, 00:11:15.278 "nvme_io": false, 00:11:15.278 "nvme_io_md": false, 00:11:15.278 "write_zeroes": true, 00:11:15.278 "zcopy": true, 00:11:15.278 "get_zone_info": false, 00:11:15.278 "zone_management": false, 00:11:15.278 "zone_append": false, 00:11:15.278 "compare": false, 00:11:15.278 "compare_and_write": false, 
00:11:15.278 "abort": true, 00:11:15.278 "seek_hole": false, 00:11:15.278 "seek_data": false, 00:11:15.278 "copy": true, 00:11:15.278 "nvme_iov_md": false 00:11:15.278 }, 00:11:15.278 "memory_domains": [ 00:11:15.278 { 00:11:15.278 "dma_device_id": "system", 00:11:15.278 "dma_device_type": 1 00:11:15.278 }, 00:11:15.278 { 00:11:15.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.278 "dma_device_type": 2 00:11:15.278 } 00:11:15.278 ], 00:11:15.278 "driver_specific": {} 00:11:15.278 } 00:11:15.278 ] 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.278 "name": "Existed_Raid", 00:11:15.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.278 "strip_size_kb": 0, 00:11:15.278 "state": "configuring", 00:11:15.278 "raid_level": "raid1", 00:11:15.278 "superblock": false, 00:11:15.278 "num_base_bdevs": 4, 00:11:15.278 "num_base_bdevs_discovered": 3, 00:11:15.278 "num_base_bdevs_operational": 4, 00:11:15.278 "base_bdevs_list": [ 00:11:15.278 { 00:11:15.278 "name": "BaseBdev1", 00:11:15.278 "uuid": "54b75568-40da-4008-820f-a5fc88b3f2bf", 00:11:15.278 "is_configured": true, 00:11:15.278 "data_offset": 0, 00:11:15.278 "data_size": 65536 00:11:15.278 }, 00:11:15.278 { 00:11:15.278 "name": "BaseBdev2", 00:11:15.278 "uuid": "b8db875e-1333-4538-80fa-d735ba1c4f75", 00:11:15.278 "is_configured": true, 00:11:15.278 "data_offset": 0, 00:11:15.278 "data_size": 65536 00:11:15.278 }, 00:11:15.278 { 00:11:15.278 "name": "BaseBdev3", 00:11:15.278 "uuid": "1320c591-a64b-4002-aa64-37c66d3378e2", 00:11:15.278 "is_configured": true, 00:11:15.278 "data_offset": 0, 00:11:15.278 "data_size": 65536 00:11:15.278 }, 00:11:15.278 { 00:11:15.278 "name": "BaseBdev4", 00:11:15.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.278 "is_configured": false, 
00:11:15.278 "data_offset": 0, 00:11:15.278 "data_size": 0 00:11:15.278 } 00:11:15.278 ] 00:11:15.278 }' 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.278 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.537 [2024-11-26 21:18:33.670646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:15.537 [2024-11-26 21:18:33.670801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:15.537 [2024-11-26 21:18:33.670828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:15.537 [2024-11-26 21:18:33.671152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:15.537 [2024-11-26 21:18:33.671373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:15.537 [2024-11-26 21:18:33.671423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:15.537 [2024-11-26 21:18:33.671760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.537 BaseBdev4 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.537 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.797 [ 00:11:15.797 { 00:11:15.797 "name": "BaseBdev4", 00:11:15.797 "aliases": [ 00:11:15.797 "2a28c0b8-5935-4af4-a892-cda1894e89a3" 00:11:15.797 ], 00:11:15.797 "product_name": "Malloc disk", 00:11:15.797 "block_size": 512, 00:11:15.797 "num_blocks": 65536, 00:11:15.797 "uuid": "2a28c0b8-5935-4af4-a892-cda1894e89a3", 00:11:15.797 "assigned_rate_limits": { 00:11:15.797 "rw_ios_per_sec": 0, 00:11:15.797 "rw_mbytes_per_sec": 0, 00:11:15.797 "r_mbytes_per_sec": 0, 00:11:15.797 "w_mbytes_per_sec": 0 00:11:15.797 }, 00:11:15.797 "claimed": true, 00:11:15.797 "claim_type": "exclusive_write", 00:11:15.797 "zoned": false, 00:11:15.797 "supported_io_types": { 00:11:15.797 "read": true, 00:11:15.797 "write": true, 00:11:15.797 "unmap": true, 00:11:15.797 "flush": true, 00:11:15.797 "reset": true, 00:11:15.797 
"nvme_admin": false, 00:11:15.797 "nvme_io": false, 00:11:15.797 "nvme_io_md": false, 00:11:15.797 "write_zeroes": true, 00:11:15.797 "zcopy": true, 00:11:15.797 "get_zone_info": false, 00:11:15.797 "zone_management": false, 00:11:15.797 "zone_append": false, 00:11:15.797 "compare": false, 00:11:15.797 "compare_and_write": false, 00:11:15.797 "abort": true, 00:11:15.797 "seek_hole": false, 00:11:15.797 "seek_data": false, 00:11:15.797 "copy": true, 00:11:15.797 "nvme_iov_md": false 00:11:15.797 }, 00:11:15.797 "memory_domains": [ 00:11:15.797 { 00:11:15.797 "dma_device_id": "system", 00:11:15.797 "dma_device_type": 1 00:11:15.797 }, 00:11:15.797 { 00:11:15.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.797 "dma_device_type": 2 00:11:15.797 } 00:11:15.797 ], 00:11:15.797 "driver_specific": {} 00:11:15.797 } 00:11:15.797 ] 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.797 21:18:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.797 "name": "Existed_Raid", 00:11:15.797 "uuid": "3fd0e766-b682-4046-b29a-d3772a96dcec", 00:11:15.797 "strip_size_kb": 0, 00:11:15.797 "state": "online", 00:11:15.797 "raid_level": "raid1", 00:11:15.797 "superblock": false, 00:11:15.797 "num_base_bdevs": 4, 00:11:15.797 "num_base_bdevs_discovered": 4, 00:11:15.797 "num_base_bdevs_operational": 4, 00:11:15.797 "base_bdevs_list": [ 00:11:15.797 { 00:11:15.797 "name": "BaseBdev1", 00:11:15.797 "uuid": "54b75568-40da-4008-820f-a5fc88b3f2bf", 00:11:15.797 "is_configured": true, 00:11:15.797 "data_offset": 0, 00:11:15.797 "data_size": 65536 00:11:15.797 }, 00:11:15.797 { 00:11:15.797 "name": "BaseBdev2", 00:11:15.797 "uuid": "b8db875e-1333-4538-80fa-d735ba1c4f75", 00:11:15.797 "is_configured": true, 00:11:15.797 "data_offset": 0, 00:11:15.797 "data_size": 65536 00:11:15.797 }, 00:11:15.797 { 00:11:15.797 "name": "BaseBdev3", 00:11:15.797 "uuid": 
"1320c591-a64b-4002-aa64-37c66d3378e2", 00:11:15.797 "is_configured": true, 00:11:15.797 "data_offset": 0, 00:11:15.797 "data_size": 65536 00:11:15.797 }, 00:11:15.797 { 00:11:15.797 "name": "BaseBdev4", 00:11:15.797 "uuid": "2a28c0b8-5935-4af4-a892-cda1894e89a3", 00:11:15.797 "is_configured": true, 00:11:15.797 "data_offset": 0, 00:11:15.797 "data_size": 65536 00:11:15.797 } 00:11:15.797 ] 00:11:15.797 }' 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.797 21:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.056 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:16.056 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:16.056 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.056 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.056 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.056 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.056 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:16.056 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.056 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.056 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.056 [2024-11-26 21:18:34.198259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.316 21:18:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.316 "name": "Existed_Raid", 00:11:16.316 "aliases": [ 00:11:16.316 "3fd0e766-b682-4046-b29a-d3772a96dcec" 00:11:16.316 ], 00:11:16.316 "product_name": "Raid Volume", 00:11:16.316 "block_size": 512, 00:11:16.316 "num_blocks": 65536, 00:11:16.316 "uuid": "3fd0e766-b682-4046-b29a-d3772a96dcec", 00:11:16.316 "assigned_rate_limits": { 00:11:16.316 "rw_ios_per_sec": 0, 00:11:16.316 "rw_mbytes_per_sec": 0, 00:11:16.316 "r_mbytes_per_sec": 0, 00:11:16.316 "w_mbytes_per_sec": 0 00:11:16.316 }, 00:11:16.316 "claimed": false, 00:11:16.316 "zoned": false, 00:11:16.316 "supported_io_types": { 00:11:16.316 "read": true, 00:11:16.316 "write": true, 00:11:16.316 "unmap": false, 00:11:16.316 "flush": false, 00:11:16.316 "reset": true, 00:11:16.316 "nvme_admin": false, 00:11:16.316 "nvme_io": false, 00:11:16.316 "nvme_io_md": false, 00:11:16.316 "write_zeroes": true, 00:11:16.316 "zcopy": false, 00:11:16.316 "get_zone_info": false, 00:11:16.316 "zone_management": false, 00:11:16.316 "zone_append": false, 00:11:16.316 "compare": false, 00:11:16.316 "compare_and_write": false, 00:11:16.316 "abort": false, 00:11:16.316 "seek_hole": false, 00:11:16.316 "seek_data": false, 00:11:16.316 "copy": false, 00:11:16.316 "nvme_iov_md": false 00:11:16.316 }, 00:11:16.316 "memory_domains": [ 00:11:16.316 { 00:11:16.316 "dma_device_id": "system", 00:11:16.316 "dma_device_type": 1 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.316 "dma_device_type": 2 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "dma_device_id": "system", 00:11:16.316 "dma_device_type": 1 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.316 "dma_device_type": 2 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "dma_device_id": "system", 00:11:16.316 "dma_device_type": 1 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:16.316 "dma_device_type": 2 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "dma_device_id": "system", 00:11:16.316 "dma_device_type": 1 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.316 "dma_device_type": 2 00:11:16.316 } 00:11:16.316 ], 00:11:16.316 "driver_specific": { 00:11:16.316 "raid": { 00:11:16.316 "uuid": "3fd0e766-b682-4046-b29a-d3772a96dcec", 00:11:16.316 "strip_size_kb": 0, 00:11:16.316 "state": "online", 00:11:16.316 "raid_level": "raid1", 00:11:16.316 "superblock": false, 00:11:16.316 "num_base_bdevs": 4, 00:11:16.316 "num_base_bdevs_discovered": 4, 00:11:16.316 "num_base_bdevs_operational": 4, 00:11:16.316 "base_bdevs_list": [ 00:11:16.316 { 00:11:16.316 "name": "BaseBdev1", 00:11:16.316 "uuid": "54b75568-40da-4008-820f-a5fc88b3f2bf", 00:11:16.316 "is_configured": true, 00:11:16.316 "data_offset": 0, 00:11:16.316 "data_size": 65536 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "name": "BaseBdev2", 00:11:16.316 "uuid": "b8db875e-1333-4538-80fa-d735ba1c4f75", 00:11:16.316 "is_configured": true, 00:11:16.316 "data_offset": 0, 00:11:16.316 "data_size": 65536 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "name": "BaseBdev3", 00:11:16.316 "uuid": "1320c591-a64b-4002-aa64-37c66d3378e2", 00:11:16.316 "is_configured": true, 00:11:16.316 "data_offset": 0, 00:11:16.316 "data_size": 65536 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "name": "BaseBdev4", 00:11:16.316 "uuid": "2a28c0b8-5935-4af4-a892-cda1894e89a3", 00:11:16.316 "is_configured": true, 00:11:16.316 "data_offset": 0, 00:11:16.316 "data_size": 65536 00:11:16.316 } 00:11:16.316 ] 00:11:16.316 } 00:11:16.316 } 00:11:16.316 }' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:16.316 BaseBdev2 00:11:16.316 BaseBdev3 
00:11:16.316 BaseBdev4' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.316 21:18:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.316 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.577 21:18:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.577 [2024-11-26 21:18:34.489399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.577 
21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.577 "name": "Existed_Raid", 00:11:16.577 "uuid": "3fd0e766-b682-4046-b29a-d3772a96dcec", 00:11:16.577 "strip_size_kb": 0, 00:11:16.577 "state": "online", 00:11:16.577 "raid_level": "raid1", 00:11:16.577 "superblock": false, 00:11:16.577 "num_base_bdevs": 4, 00:11:16.577 "num_base_bdevs_discovered": 3, 00:11:16.577 "num_base_bdevs_operational": 3, 00:11:16.577 "base_bdevs_list": [ 00:11:16.577 { 00:11:16.577 "name": null, 00:11:16.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.577 "is_configured": false, 00:11:16.577 "data_offset": 0, 00:11:16.577 "data_size": 65536 00:11:16.577 }, 00:11:16.577 { 00:11:16.577 "name": "BaseBdev2", 00:11:16.577 "uuid": "b8db875e-1333-4538-80fa-d735ba1c4f75", 00:11:16.577 "is_configured": true, 00:11:16.577 "data_offset": 0, 00:11:16.577 "data_size": 65536 00:11:16.577 }, 00:11:16.577 { 00:11:16.577 "name": "BaseBdev3", 00:11:16.577 "uuid": "1320c591-a64b-4002-aa64-37c66d3378e2", 00:11:16.577 "is_configured": true, 00:11:16.577 "data_offset": 0, 
00:11:16.577 "data_size": 65536 00:11:16.577 }, 00:11:16.577 { 00:11:16.577 "name": "BaseBdev4", 00:11:16.577 "uuid": "2a28c0b8-5935-4af4-a892-cda1894e89a3", 00:11:16.577 "is_configured": true, 00:11:16.577 "data_offset": 0, 00:11:16.577 "data_size": 65536 00:11:16.577 } 00:11:16.577 ] 00:11:16.577 }' 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.577 21:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.146 [2024-11-26 21:18:35.103382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.146 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.146 [2024-11-26 21:18:35.267168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.406 [2024-11-26 21:18:35.425444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:17.406 [2024-11-26 21:18:35.425554] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.406 [2024-11-26 21:18:35.528878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.406 [2024-11-26 21:18:35.529063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.406 [2024-11-26 21:18:35.529097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.406 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.666 BaseBdev2 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.666 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.667 [ 00:11:17.667 { 00:11:17.667 "name": "BaseBdev2", 00:11:17.667 "aliases": [ 00:11:17.667 "8ce3f76b-b0f0-4343-8f72-1770572cda1c" 00:11:17.667 ], 00:11:17.667 "product_name": "Malloc disk", 00:11:17.667 "block_size": 512, 00:11:17.667 "num_blocks": 65536, 00:11:17.667 "uuid": "8ce3f76b-b0f0-4343-8f72-1770572cda1c", 00:11:17.667 "assigned_rate_limits": { 00:11:17.667 "rw_ios_per_sec": 0, 00:11:17.667 "rw_mbytes_per_sec": 0, 00:11:17.667 "r_mbytes_per_sec": 0, 00:11:17.667 "w_mbytes_per_sec": 0 00:11:17.667 }, 00:11:17.667 "claimed": false, 00:11:17.667 "zoned": false, 00:11:17.667 "supported_io_types": { 00:11:17.667 "read": true, 00:11:17.667 "write": true, 00:11:17.667 "unmap": true, 00:11:17.667 "flush": true, 00:11:17.667 "reset": true, 00:11:17.667 "nvme_admin": false, 00:11:17.667 "nvme_io": false, 00:11:17.667 "nvme_io_md": false, 00:11:17.667 "write_zeroes": true, 00:11:17.667 "zcopy": true, 00:11:17.667 "get_zone_info": false, 00:11:17.667 "zone_management": false, 00:11:17.667 "zone_append": false, 
00:11:17.667 "compare": false, 00:11:17.667 "compare_and_write": false, 00:11:17.667 "abort": true, 00:11:17.667 "seek_hole": false, 00:11:17.667 "seek_data": false, 00:11:17.667 "copy": true, 00:11:17.667 "nvme_iov_md": false 00:11:17.667 }, 00:11:17.667 "memory_domains": [ 00:11:17.667 { 00:11:17.667 "dma_device_id": "system", 00:11:17.667 "dma_device_type": 1 00:11:17.667 }, 00:11:17.667 { 00:11:17.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.667 "dma_device_type": 2 00:11:17.667 } 00:11:17.667 ], 00:11:17.667 "driver_specific": {} 00:11:17.667 } 00:11:17.667 ] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.667 BaseBdev3 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.667 [ 00:11:17.667 { 00:11:17.667 "name": "BaseBdev3", 00:11:17.667 "aliases": [ 00:11:17.667 "7b72ffe9-a3e5-4d76-b2c5-21d2ff538290" 00:11:17.667 ], 00:11:17.667 "product_name": "Malloc disk", 00:11:17.667 "block_size": 512, 00:11:17.667 "num_blocks": 65536, 00:11:17.667 "uuid": "7b72ffe9-a3e5-4d76-b2c5-21d2ff538290", 00:11:17.667 "assigned_rate_limits": { 00:11:17.667 "rw_ios_per_sec": 0, 00:11:17.667 "rw_mbytes_per_sec": 0, 00:11:17.667 "r_mbytes_per_sec": 0, 00:11:17.667 "w_mbytes_per_sec": 0 00:11:17.667 }, 00:11:17.667 "claimed": false, 00:11:17.667 "zoned": false, 00:11:17.667 "supported_io_types": { 00:11:17.667 "read": true, 00:11:17.667 "write": true, 00:11:17.667 "unmap": true, 00:11:17.667 "flush": true, 00:11:17.667 "reset": true, 00:11:17.667 "nvme_admin": false, 00:11:17.667 "nvme_io": false, 00:11:17.667 "nvme_io_md": false, 00:11:17.667 "write_zeroes": true, 00:11:17.667 "zcopy": true, 00:11:17.667 "get_zone_info": false, 00:11:17.667 "zone_management": false, 00:11:17.667 "zone_append": false, 
00:11:17.667 "compare": false, 00:11:17.667 "compare_and_write": false, 00:11:17.667 "abort": true, 00:11:17.667 "seek_hole": false, 00:11:17.667 "seek_data": false, 00:11:17.667 "copy": true, 00:11:17.667 "nvme_iov_md": false 00:11:17.667 }, 00:11:17.667 "memory_domains": [ 00:11:17.667 { 00:11:17.667 "dma_device_id": "system", 00:11:17.667 "dma_device_type": 1 00:11:17.667 }, 00:11:17.667 { 00:11:17.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.667 "dma_device_type": 2 00:11:17.667 } 00:11:17.667 ], 00:11:17.667 "driver_specific": {} 00:11:17.667 } 00:11:17.667 ] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.667 BaseBdev4 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.667 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.667 [ 00:11:17.667 { 00:11:17.667 "name": "BaseBdev4", 00:11:17.667 "aliases": [ 00:11:17.667 "b8de2d64-69de-47ee-86fd-8073b6e02d88" 00:11:17.667 ], 00:11:17.667 "product_name": "Malloc disk", 00:11:17.667 "block_size": 512, 00:11:17.667 "num_blocks": 65536, 00:11:17.667 "uuid": "b8de2d64-69de-47ee-86fd-8073b6e02d88", 00:11:17.667 "assigned_rate_limits": { 00:11:17.667 "rw_ios_per_sec": 0, 00:11:17.667 "rw_mbytes_per_sec": 0, 00:11:17.667 "r_mbytes_per_sec": 0, 00:11:17.667 "w_mbytes_per_sec": 0 00:11:17.667 }, 00:11:17.667 "claimed": false, 00:11:17.667 "zoned": false, 00:11:17.667 "supported_io_types": { 00:11:17.667 "read": true, 00:11:17.667 "write": true, 00:11:17.667 "unmap": true, 00:11:17.667 "flush": true, 00:11:17.667 "reset": true, 00:11:17.667 "nvme_admin": false, 00:11:17.667 "nvme_io": false, 00:11:17.667 "nvme_io_md": false, 00:11:17.667 "write_zeroes": true, 00:11:17.667 "zcopy": true, 00:11:17.667 "get_zone_info": false, 00:11:17.667 "zone_management": false, 00:11:17.667 "zone_append": false, 
00:11:17.667 "compare": false, 00:11:17.667 "compare_and_write": false, 00:11:17.667 "abort": true, 00:11:17.667 "seek_hole": false, 00:11:17.667 "seek_data": false, 00:11:17.667 "copy": true, 00:11:17.667 "nvme_iov_md": false 00:11:17.667 }, 00:11:17.667 "memory_domains": [ 00:11:17.667 { 00:11:17.667 "dma_device_id": "system", 00:11:17.667 "dma_device_type": 1 00:11:17.667 }, 00:11:17.667 { 00:11:17.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.667 "dma_device_type": 2 00:11:17.667 } 00:11:17.667 ], 00:11:17.667 "driver_specific": {} 00:11:17.667 } 00:11:17.668 ] 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.668 [2024-11-26 21:18:35.794464] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.668 [2024-11-26 21:18:35.794513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.668 [2024-11-26 21:18:35.794534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.668 [2024-11-26 21:18:35.796492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.668 [2024-11-26 21:18:35.796544] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.668 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.928 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.928 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:17.928 "name": "Existed_Raid", 00:11:17.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.928 "strip_size_kb": 0, 00:11:17.928 "state": "configuring", 00:11:17.928 "raid_level": "raid1", 00:11:17.928 "superblock": false, 00:11:17.928 "num_base_bdevs": 4, 00:11:17.928 "num_base_bdevs_discovered": 3, 00:11:17.928 "num_base_bdevs_operational": 4, 00:11:17.928 "base_bdevs_list": [ 00:11:17.928 { 00:11:17.928 "name": "BaseBdev1", 00:11:17.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.928 "is_configured": false, 00:11:17.928 "data_offset": 0, 00:11:17.928 "data_size": 0 00:11:17.928 }, 00:11:17.928 { 00:11:17.928 "name": "BaseBdev2", 00:11:17.928 "uuid": "8ce3f76b-b0f0-4343-8f72-1770572cda1c", 00:11:17.928 "is_configured": true, 00:11:17.928 "data_offset": 0, 00:11:17.928 "data_size": 65536 00:11:17.928 }, 00:11:17.928 { 00:11:17.928 "name": "BaseBdev3", 00:11:17.928 "uuid": "7b72ffe9-a3e5-4d76-b2c5-21d2ff538290", 00:11:17.928 "is_configured": true, 00:11:17.928 "data_offset": 0, 00:11:17.928 "data_size": 65536 00:11:17.928 }, 00:11:17.928 { 00:11:17.928 "name": "BaseBdev4", 00:11:17.928 "uuid": "b8de2d64-69de-47ee-86fd-8073b6e02d88", 00:11:17.928 "is_configured": true, 00:11:17.928 "data_offset": 0, 00:11:17.928 "data_size": 65536 00:11:17.928 } 00:11:17.928 ] 00:11:17.928 }' 00:11:17.928 21:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.928 21:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.187 [2024-11-26 21:18:36.265700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
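The `verify_raid_bdev_state` helper seen throughout this trace captures the `Existed_Raid` entry by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and then compares fields such as `state` and `num_base_bdevs_discovered` against the expected values. As a rough illustration of what that comparison sees, the sketch below pulls the same two fields out of an abridged copy of the JSON logged above. It is sample data, not live RPC output, and `sed` stands in for `jq` only so the sketch runs anywhere:

```shell
# Abridged raid_bdev_info, copied from the bdev_raid_get_bdevs dump above
# (sample data for illustration; a real run gets this from rpc_cmd).
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4
}'
# The real test uses: jq -r '.[] | select(.name == "Existed_Raid")'
# sed is used here only to keep the sketch dependency-free.
state=$(echo "$raid_bdev_info" | sed -n 's/.*"state": "\([a-z]*\)".*/\1/p')
discovered=$(echo "$raid_bdev_info" | sed -n 's/.*"num_base_bdevs_discovered": \([0-9]*\).*/\1/p')
echo "state=$state discovered=$discovered"
```

With the sample above this prints `state=configuring discovered=3`, matching what the trace asserts after `Existed_Raid` is created with four base bdevs but before `BaseBdev1` exists.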
00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.187 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.188 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.188 "name": "Existed_Raid", 00:11:18.188 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:18.188 "strip_size_kb": 0, 00:11:18.188 "state": "configuring", 00:11:18.188 "raid_level": "raid1", 00:11:18.188 "superblock": false, 00:11:18.188 "num_base_bdevs": 4, 00:11:18.188 "num_base_bdevs_discovered": 2, 00:11:18.188 "num_base_bdevs_operational": 4, 00:11:18.188 "base_bdevs_list": [ 00:11:18.188 { 00:11:18.188 "name": "BaseBdev1", 00:11:18.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.188 "is_configured": false, 00:11:18.188 "data_offset": 0, 00:11:18.188 "data_size": 0 00:11:18.188 }, 00:11:18.188 { 00:11:18.188 "name": null, 00:11:18.188 "uuid": "8ce3f76b-b0f0-4343-8f72-1770572cda1c", 00:11:18.188 "is_configured": false, 00:11:18.188 "data_offset": 0, 00:11:18.188 "data_size": 65536 00:11:18.188 }, 00:11:18.188 { 00:11:18.188 "name": "BaseBdev3", 00:11:18.188 "uuid": "7b72ffe9-a3e5-4d76-b2c5-21d2ff538290", 00:11:18.188 "is_configured": true, 00:11:18.188 "data_offset": 0, 00:11:18.188 "data_size": 65536 00:11:18.188 }, 00:11:18.188 { 00:11:18.188 "name": "BaseBdev4", 00:11:18.188 "uuid": "b8de2d64-69de-47ee-86fd-8073b6e02d88", 00:11:18.188 "is_configured": true, 00:11:18.188 "data_offset": 0, 00:11:18.188 "data_size": 65536 00:11:18.188 } 00:11:18.188 ] 00:11:18.188 }' 00:11:18.188 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.188 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.758 [2024-11-26 21:18:36.798697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.758 BaseBdev1 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.758 [ 00:11:18.758 { 00:11:18.758 "name": "BaseBdev1", 00:11:18.758 "aliases": [ 00:11:18.758 "bcb394d6-e245-4239-84a6-48ce799e44bc" 00:11:18.758 ], 00:11:18.758 "product_name": "Malloc disk", 00:11:18.758 "block_size": 512, 00:11:18.758 "num_blocks": 65536, 00:11:18.758 "uuid": "bcb394d6-e245-4239-84a6-48ce799e44bc", 00:11:18.758 "assigned_rate_limits": { 00:11:18.758 "rw_ios_per_sec": 0, 00:11:18.758 "rw_mbytes_per_sec": 0, 00:11:18.758 "r_mbytes_per_sec": 0, 00:11:18.758 "w_mbytes_per_sec": 0 00:11:18.758 }, 00:11:18.758 "claimed": true, 00:11:18.758 "claim_type": "exclusive_write", 00:11:18.758 "zoned": false, 00:11:18.758 "supported_io_types": { 00:11:18.758 "read": true, 00:11:18.758 "write": true, 00:11:18.758 "unmap": true, 00:11:18.758 "flush": true, 00:11:18.758 "reset": true, 00:11:18.758 "nvme_admin": false, 00:11:18.758 "nvme_io": false, 00:11:18.758 "nvme_io_md": false, 00:11:18.758 "write_zeroes": true, 00:11:18.758 "zcopy": true, 00:11:18.758 "get_zone_info": false, 00:11:18.758 "zone_management": false, 00:11:18.758 "zone_append": false, 00:11:18.758 "compare": false, 00:11:18.758 "compare_and_write": false, 00:11:18.758 "abort": true, 00:11:18.758 "seek_hole": false, 00:11:18.758 "seek_data": false, 00:11:18.758 "copy": true, 00:11:18.758 "nvme_iov_md": false 00:11:18.758 }, 00:11:18.758 "memory_domains": [ 00:11:18.758 { 00:11:18.758 "dma_device_id": "system", 00:11:18.758 "dma_device_type": 1 00:11:18.758 }, 00:11:18.758 { 00:11:18.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.758 "dma_device_type": 2 00:11:18.758 } 00:11:18.758 ], 00:11:18.758 "driver_specific": {} 00:11:18.758 } 00:11:18.758 ] 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
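After each `bdev_malloc_create`, the trace calls `waitforbdev` (from `common/autotest_common.sh`), which runs `bdev_wait_for_examine` and then `bdev_get_bdevs -b <name> -t <timeout>` to confirm the bdev is visible. The sketch below is a loose rendering of that polling idea, not the upstream implementation: `rpc_cmd` is stubbed to succeed on the third probe, and the 50 ms step is an assumption chosen only so the sketch runs standalone:

```shell
# Stubbed stand-in for scripts/rpc.py: reports the bdev as present
# only from the third probe on, to exercise the retry path.
attempts=0
rpc_cmd() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

# Poll until the named bdev appears or ~bdev_timeout ms elapse
# (illustrative sketch of the waitforbdev idea, not the real helper).
waitforbdev() {
  local bdev_name=$1 bdev_timeout=${2:-2000} i
  for (( i = 0; i < bdev_timeout; i += 50 )); do
    if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null; then
      return 0
    fi
    sleep 0.05
  done
  return 1
}

waitforbdev BaseBdev1 && echo "BaseBdev1 ready after $attempts probes"
```

With the stub above the wait succeeds on the third probe; against a live SPDK target the same loop returns as soon as `bdev_get_bdevs` stops erroring.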
00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.758 "name": "Existed_Raid", 00:11:18.758 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:18.758 "strip_size_kb": 0, 00:11:18.758 "state": "configuring", 00:11:18.758 "raid_level": "raid1", 00:11:18.758 "superblock": false, 00:11:18.758 "num_base_bdevs": 4, 00:11:18.758 "num_base_bdevs_discovered": 3, 00:11:18.758 "num_base_bdevs_operational": 4, 00:11:18.758 "base_bdevs_list": [ 00:11:18.758 { 00:11:18.758 "name": "BaseBdev1", 00:11:18.758 "uuid": "bcb394d6-e245-4239-84a6-48ce799e44bc", 00:11:18.758 "is_configured": true, 00:11:18.758 "data_offset": 0, 00:11:18.758 "data_size": 65536 00:11:18.758 }, 00:11:18.758 { 00:11:18.758 "name": null, 00:11:18.758 "uuid": "8ce3f76b-b0f0-4343-8f72-1770572cda1c", 00:11:18.758 "is_configured": false, 00:11:18.758 "data_offset": 0, 00:11:18.758 "data_size": 65536 00:11:18.758 }, 00:11:18.758 { 00:11:18.758 "name": "BaseBdev3", 00:11:18.758 "uuid": "7b72ffe9-a3e5-4d76-b2c5-21d2ff538290", 00:11:18.758 "is_configured": true, 00:11:18.758 "data_offset": 0, 00:11:18.758 "data_size": 65536 00:11:18.758 }, 00:11:18.758 { 00:11:18.758 "name": "BaseBdev4", 00:11:18.758 "uuid": "b8de2d64-69de-47ee-86fd-8073b6e02d88", 00:11:18.758 "is_configured": true, 00:11:18.758 "data_offset": 0, 00:11:18.758 "data_size": 65536 00:11:18.758 } 00:11:18.758 ] 00:11:18.758 }' 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.758 21:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.329 [2024-11-26 21:18:37.305892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.329 "name": "Existed_Raid", 00:11:19.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.329 "strip_size_kb": 0, 00:11:19.329 "state": "configuring", 00:11:19.329 "raid_level": "raid1", 00:11:19.329 "superblock": false, 00:11:19.329 "num_base_bdevs": 4, 00:11:19.329 "num_base_bdevs_discovered": 2, 00:11:19.329 "num_base_bdevs_operational": 4, 00:11:19.329 "base_bdevs_list": [ 00:11:19.329 { 00:11:19.329 "name": "BaseBdev1", 00:11:19.329 "uuid": "bcb394d6-e245-4239-84a6-48ce799e44bc", 00:11:19.329 "is_configured": true, 00:11:19.329 "data_offset": 0, 00:11:19.329 "data_size": 65536 00:11:19.329 }, 00:11:19.329 { 00:11:19.329 "name": null, 00:11:19.329 "uuid": "8ce3f76b-b0f0-4343-8f72-1770572cda1c", 00:11:19.329 "is_configured": false, 00:11:19.329 "data_offset": 0, 00:11:19.329 "data_size": 65536 00:11:19.329 }, 00:11:19.329 { 00:11:19.329 "name": null, 00:11:19.329 "uuid": "7b72ffe9-a3e5-4d76-b2c5-21d2ff538290", 00:11:19.329 "is_configured": false, 00:11:19.329 "data_offset": 0, 00:11:19.329 "data_size": 65536 00:11:19.329 }, 00:11:19.329 { 00:11:19.329 "name": "BaseBdev4", 00:11:19.329 "uuid": "b8de2d64-69de-47ee-86fd-8073b6e02d88", 00:11:19.329 "is_configured": true, 00:11:19.329 "data_offset": 0, 00:11:19.329 "data_size": 65536 00:11:19.329 } 00:11:19.329 ] 00:11:19.329 }' 00:11:19.329 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.329 21:18:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.588 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.588 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:19.588 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.588 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.588 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.848 [2024-11-26 21:18:37.757094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.848 21:18:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.848 "name": "Existed_Raid", 00:11:19.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.848 "strip_size_kb": 0, 00:11:19.848 "state": "configuring", 00:11:19.848 "raid_level": "raid1", 00:11:19.848 "superblock": false, 00:11:19.848 "num_base_bdevs": 4, 00:11:19.848 "num_base_bdevs_discovered": 3, 00:11:19.848 "num_base_bdevs_operational": 4, 00:11:19.848 "base_bdevs_list": [ 00:11:19.848 { 00:11:19.848 "name": "BaseBdev1", 00:11:19.848 "uuid": "bcb394d6-e245-4239-84a6-48ce799e44bc", 00:11:19.848 "is_configured": true, 00:11:19.848 "data_offset": 0, 00:11:19.848 "data_size": 65536 00:11:19.848 }, 00:11:19.848 { 00:11:19.848 "name": null, 00:11:19.848 "uuid": "8ce3f76b-b0f0-4343-8f72-1770572cda1c", 00:11:19.848 "is_configured": false, 00:11:19.848 "data_offset": 
0, 00:11:19.848 "data_size": 65536 00:11:19.848 }, 00:11:19.848 { 00:11:19.848 "name": "BaseBdev3", 00:11:19.848 "uuid": "7b72ffe9-a3e5-4d76-b2c5-21d2ff538290", 00:11:19.848 "is_configured": true, 00:11:19.848 "data_offset": 0, 00:11:19.848 "data_size": 65536 00:11:19.848 }, 00:11:19.848 { 00:11:19.848 "name": "BaseBdev4", 00:11:19.848 "uuid": "b8de2d64-69de-47ee-86fd-8073b6e02d88", 00:11:19.848 "is_configured": true, 00:11:19.848 "data_offset": 0, 00:11:19.848 "data_size": 65536 00:11:19.848 } 00:11:19.848 ] 00:11:19.848 }' 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.848 21:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.108 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.108 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.108 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.108 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.108 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.108 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:20.108 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:20.108 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.108 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.108 [2024-11-26 21:18:38.252293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.368 21:18:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.368 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.368 "name": "Existed_Raid", 00:11:20.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.368 "strip_size_kb": 0, 00:11:20.368 "state": "configuring", 00:11:20.368 
"raid_level": "raid1", 00:11:20.368 "superblock": false, 00:11:20.368 "num_base_bdevs": 4, 00:11:20.368 "num_base_bdevs_discovered": 2, 00:11:20.368 "num_base_bdevs_operational": 4, 00:11:20.368 "base_bdevs_list": [ 00:11:20.368 { 00:11:20.368 "name": null, 00:11:20.368 "uuid": "bcb394d6-e245-4239-84a6-48ce799e44bc", 00:11:20.368 "is_configured": false, 00:11:20.368 "data_offset": 0, 00:11:20.368 "data_size": 65536 00:11:20.368 }, 00:11:20.368 { 00:11:20.368 "name": null, 00:11:20.369 "uuid": "8ce3f76b-b0f0-4343-8f72-1770572cda1c", 00:11:20.369 "is_configured": false, 00:11:20.369 "data_offset": 0, 00:11:20.369 "data_size": 65536 00:11:20.369 }, 00:11:20.369 { 00:11:20.369 "name": "BaseBdev3", 00:11:20.369 "uuid": "7b72ffe9-a3e5-4d76-b2c5-21d2ff538290", 00:11:20.369 "is_configured": true, 00:11:20.369 "data_offset": 0, 00:11:20.369 "data_size": 65536 00:11:20.369 }, 00:11:20.369 { 00:11:20.369 "name": "BaseBdev4", 00:11:20.369 "uuid": "b8de2d64-69de-47ee-86fd-8073b6e02d88", 00:11:20.369 "is_configured": true, 00:11:20.369 "data_offset": 0, 00:11:20.369 "data_size": 65536 00:11:20.369 } 00:11:20.369 ] 00:11:20.369 }' 00:11:20.369 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.369 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.629 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.629 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.629 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.629 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.890 [2024-11-26 21:18:38.814107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.890 "name": "Existed_Raid", 00:11:20.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.890 "strip_size_kb": 0, 00:11:20.890 "state": "configuring", 00:11:20.890 "raid_level": "raid1", 00:11:20.890 "superblock": false, 00:11:20.890 "num_base_bdevs": 4, 00:11:20.890 "num_base_bdevs_discovered": 3, 00:11:20.890 "num_base_bdevs_operational": 4, 00:11:20.890 "base_bdevs_list": [ 00:11:20.890 { 00:11:20.890 "name": null, 00:11:20.890 "uuid": "bcb394d6-e245-4239-84a6-48ce799e44bc", 00:11:20.890 "is_configured": false, 00:11:20.890 "data_offset": 0, 00:11:20.890 "data_size": 65536 00:11:20.890 }, 00:11:20.890 { 00:11:20.890 "name": "BaseBdev2", 00:11:20.890 "uuid": "8ce3f76b-b0f0-4343-8f72-1770572cda1c", 00:11:20.890 "is_configured": true, 00:11:20.890 "data_offset": 0, 00:11:20.890 "data_size": 65536 00:11:20.890 }, 00:11:20.890 { 00:11:20.890 "name": "BaseBdev3", 00:11:20.890 "uuid": "7b72ffe9-a3e5-4d76-b2c5-21d2ff538290", 00:11:20.890 "is_configured": true, 00:11:20.890 "data_offset": 0, 00:11:20.890 "data_size": 65536 00:11:20.890 }, 00:11:20.890 { 00:11:20.890 "name": "BaseBdev4", 00:11:20.890 "uuid": "b8de2d64-69de-47ee-86fd-8073b6e02d88", 00:11:20.890 "is_configured": true, 00:11:20.890 "data_offset": 0, 00:11:20.890 "data_size": 65536 00:11:20.890 } 00:11:20.890 ] 00:11:20.890 }' 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.890 21:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.150 21:18:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.150 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.150 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.150 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:21.150 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.150 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bcb394d6-e245-4239-84a6-48ce799e44bc 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 [2024-11-26 21:18:39.380568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:21.409 [2024-11-26 21:18:39.380609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:21.409 [2024-11-26 21:18:39.380618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:21.409 
[2024-11-26 21:18:39.380855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:21.409 [2024-11-26 21:18:39.381058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:21.409 [2024-11-26 21:18:39.381070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:21.409 [2024-11-26 21:18:39.381337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.409 NewBaseBdev 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:21.409 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 [ 00:11:21.409 { 00:11:21.409 "name": "NewBaseBdev", 00:11:21.409 "aliases": [ 00:11:21.409 "bcb394d6-e245-4239-84a6-48ce799e44bc" 00:11:21.409 ], 00:11:21.409 "product_name": "Malloc disk", 00:11:21.409 "block_size": 512, 00:11:21.409 "num_blocks": 65536, 00:11:21.409 "uuid": "bcb394d6-e245-4239-84a6-48ce799e44bc", 00:11:21.409 "assigned_rate_limits": { 00:11:21.409 "rw_ios_per_sec": 0, 00:11:21.409 "rw_mbytes_per_sec": 0, 00:11:21.409 "r_mbytes_per_sec": 0, 00:11:21.409 "w_mbytes_per_sec": 0 00:11:21.409 }, 00:11:21.409 "claimed": true, 00:11:21.409 "claim_type": "exclusive_write", 00:11:21.409 "zoned": false, 00:11:21.409 "supported_io_types": { 00:11:21.409 "read": true, 00:11:21.409 "write": true, 00:11:21.409 "unmap": true, 00:11:21.410 "flush": true, 00:11:21.410 "reset": true, 00:11:21.410 "nvme_admin": false, 00:11:21.410 "nvme_io": false, 00:11:21.410 "nvme_io_md": false, 00:11:21.410 "write_zeroes": true, 00:11:21.410 "zcopy": true, 00:11:21.410 "get_zone_info": false, 00:11:21.410 "zone_management": false, 00:11:21.410 "zone_append": false, 00:11:21.410 "compare": false, 00:11:21.410 "compare_and_write": false, 00:11:21.410 "abort": true, 00:11:21.410 "seek_hole": false, 00:11:21.410 "seek_data": false, 00:11:21.410 "copy": true, 00:11:21.410 "nvme_iov_md": false 00:11:21.410 }, 00:11:21.410 "memory_domains": [ 00:11:21.410 { 00:11:21.410 "dma_device_id": "system", 00:11:21.410 "dma_device_type": 1 00:11:21.410 }, 00:11:21.410 { 00:11:21.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.410 "dma_device_type": 2 00:11:21.410 } 00:11:21.410 ], 00:11:21.410 "driver_specific": {} 00:11:21.410 } 00:11:21.410 ] 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.410 "name": "Existed_Raid", 00:11:21.410 "uuid": "d8c3232d-7fb5-438a-8b92-74bc158d64fb", 00:11:21.410 "strip_size_kb": 0, 00:11:21.410 "state": "online", 00:11:21.410 
"raid_level": "raid1", 00:11:21.410 "superblock": false, 00:11:21.410 "num_base_bdevs": 4, 00:11:21.410 "num_base_bdevs_discovered": 4, 00:11:21.410 "num_base_bdevs_operational": 4, 00:11:21.410 "base_bdevs_list": [ 00:11:21.410 { 00:11:21.410 "name": "NewBaseBdev", 00:11:21.410 "uuid": "bcb394d6-e245-4239-84a6-48ce799e44bc", 00:11:21.410 "is_configured": true, 00:11:21.410 "data_offset": 0, 00:11:21.410 "data_size": 65536 00:11:21.410 }, 00:11:21.410 { 00:11:21.410 "name": "BaseBdev2", 00:11:21.410 "uuid": "8ce3f76b-b0f0-4343-8f72-1770572cda1c", 00:11:21.410 "is_configured": true, 00:11:21.410 "data_offset": 0, 00:11:21.410 "data_size": 65536 00:11:21.410 }, 00:11:21.410 { 00:11:21.410 "name": "BaseBdev3", 00:11:21.410 "uuid": "7b72ffe9-a3e5-4d76-b2c5-21d2ff538290", 00:11:21.410 "is_configured": true, 00:11:21.410 "data_offset": 0, 00:11:21.410 "data_size": 65536 00:11:21.410 }, 00:11:21.410 { 00:11:21.410 "name": "BaseBdev4", 00:11:21.410 "uuid": "b8de2d64-69de-47ee-86fd-8073b6e02d88", 00:11:21.410 "is_configured": true, 00:11:21.410 "data_offset": 0, 00:11:21.410 "data_size": 65536 00:11:21.410 } 00:11:21.410 ] 00:11:21.410 }' 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.410 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.980 [2024-11-26 21:18:39.844238] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.980 "name": "Existed_Raid", 00:11:21.980 "aliases": [ 00:11:21.980 "d8c3232d-7fb5-438a-8b92-74bc158d64fb" 00:11:21.980 ], 00:11:21.980 "product_name": "Raid Volume", 00:11:21.980 "block_size": 512, 00:11:21.980 "num_blocks": 65536, 00:11:21.980 "uuid": "d8c3232d-7fb5-438a-8b92-74bc158d64fb", 00:11:21.980 "assigned_rate_limits": { 00:11:21.980 "rw_ios_per_sec": 0, 00:11:21.980 "rw_mbytes_per_sec": 0, 00:11:21.980 "r_mbytes_per_sec": 0, 00:11:21.980 "w_mbytes_per_sec": 0 00:11:21.980 }, 00:11:21.980 "claimed": false, 00:11:21.980 "zoned": false, 00:11:21.980 "supported_io_types": { 00:11:21.980 "read": true, 00:11:21.980 "write": true, 00:11:21.980 "unmap": false, 00:11:21.980 "flush": false, 00:11:21.980 "reset": true, 00:11:21.980 "nvme_admin": false, 00:11:21.980 "nvme_io": false, 00:11:21.980 "nvme_io_md": false, 00:11:21.980 "write_zeroes": true, 00:11:21.980 "zcopy": false, 00:11:21.980 "get_zone_info": false, 00:11:21.980 "zone_management": false, 00:11:21.980 "zone_append": false, 00:11:21.980 "compare": false, 00:11:21.980 "compare_and_write": false, 00:11:21.980 "abort": false, 00:11:21.980 "seek_hole": false, 00:11:21.980 "seek_data": false, 00:11:21.980 
"copy": false, 00:11:21.980 "nvme_iov_md": false 00:11:21.980 }, 00:11:21.980 "memory_domains": [ 00:11:21.980 { 00:11:21.980 "dma_device_id": "system", 00:11:21.980 "dma_device_type": 1 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.980 "dma_device_type": 2 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "dma_device_id": "system", 00:11:21.980 "dma_device_type": 1 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.980 "dma_device_type": 2 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "dma_device_id": "system", 00:11:21.980 "dma_device_type": 1 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.980 "dma_device_type": 2 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "dma_device_id": "system", 00:11:21.980 "dma_device_type": 1 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.980 "dma_device_type": 2 00:11:21.980 } 00:11:21.980 ], 00:11:21.980 "driver_specific": { 00:11:21.980 "raid": { 00:11:21.980 "uuid": "d8c3232d-7fb5-438a-8b92-74bc158d64fb", 00:11:21.980 "strip_size_kb": 0, 00:11:21.980 "state": "online", 00:11:21.980 "raid_level": "raid1", 00:11:21.980 "superblock": false, 00:11:21.980 "num_base_bdevs": 4, 00:11:21.980 "num_base_bdevs_discovered": 4, 00:11:21.980 "num_base_bdevs_operational": 4, 00:11:21.980 "base_bdevs_list": [ 00:11:21.980 { 00:11:21.980 "name": "NewBaseBdev", 00:11:21.980 "uuid": "bcb394d6-e245-4239-84a6-48ce799e44bc", 00:11:21.980 "is_configured": true, 00:11:21.980 "data_offset": 0, 00:11:21.980 "data_size": 65536 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "name": "BaseBdev2", 00:11:21.980 "uuid": "8ce3f76b-b0f0-4343-8f72-1770572cda1c", 00:11:21.980 "is_configured": true, 00:11:21.980 "data_offset": 0, 00:11:21.980 "data_size": 65536 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "name": "BaseBdev3", 00:11:21.980 "uuid": "7b72ffe9-a3e5-4d76-b2c5-21d2ff538290", 00:11:21.980 
"is_configured": true, 00:11:21.980 "data_offset": 0, 00:11:21.980 "data_size": 65536 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "name": "BaseBdev4", 00:11:21.980 "uuid": "b8de2d64-69de-47ee-86fd-8073b6e02d88", 00:11:21.980 "is_configured": true, 00:11:21.980 "data_offset": 0, 00:11:21.980 "data_size": 65536 00:11:21.980 } 00:11:21.980 ] 00:11:21.980 } 00:11:21.980 } 00:11:21.980 }' 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:21.980 BaseBdev2 00:11:21.980 BaseBdev3 00:11:21.980 BaseBdev4' 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.980 21:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.980 21:18:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.980 21:18:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.980 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.240 [2024-11-26 21:18:40.183263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.240 [2024-11-26 21:18:40.183334] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.240 [2024-11-26 21:18:40.183416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.240 [2024-11-26 21:18:40.183716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.240 [2024-11-26 21:18:40.183729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72989 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72989 ']' 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72989 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72989 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.240 killing process with pid 72989 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72989' 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72989 00:11:22.240 [2024-11-26 21:18:40.231978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.240 21:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72989 00:11:22.501 [2024-11-26 21:18:40.606019] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:23.921 00:11:23.921 real 0m11.529s 00:11:23.921 user 0m18.323s 00:11:23.921 sys 0m2.005s 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.921 ************************************ 00:11:23.921 END TEST raid_state_function_test 00:11:23.921 ************************************ 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
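The name-extraction step traced above (bdev_raid.sh@188) can be reproduced outside the test harness. This is a minimal sketch: the JSON below is a hypothetical stand-in for `bdev_get_bdevs` output, not captured from this run, but the `jq` filter is the one shown in the trace.

```shell
#!/bin/sh
# Reproduce the filter from bdev_raid.sh@188: list the names of all
# configured base bdevs from a raid bdev's driver_specific info.
# The JSON here is a hypothetical sample, not real rpc.py output.
json='{
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        { "name": "BaseBdev1", "is_configured": true },
        { "name": "BaseBdev2", "is_configured": false }
      ]
    }
  }
}'
printf '%s\n' "$json" |
  jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
# Prints only the configured names: BaseBdev1
```

The test then loops over each returned name and compares `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` for the base bdev against the raid bdev, which is the `'512    '` comparison visible in the trace.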
00:11:23.921 21:18:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:23.921 21:18:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:23.921 21:18:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.921 21:18:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.921 ************************************ 00:11:23.921 START TEST raid_state_function_test_sb 00:11:23.921 ************************************ 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.921 
21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.921 Process raid pid: 73655 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73655 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73655' 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73655 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73655 ']' 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:23.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.921 21:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.921 [2024-11-26 21:18:41.865355] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:23.921 [2024-11-26 21:18:41.865573] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.921 [2024-11-26 21:18:42.049763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.181 [2024-11-26 21:18:42.166248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.441 [2024-11-26 21:18:42.374982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.441 [2024-11-26 21:18:42.375090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.701 21:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.701 21:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:24.701 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:24.701 21:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.701 21:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.701 [2024-11-26 21:18:42.684168] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.701 [2024-11-26 21:18:42.684266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.701 [2024-11-26 21:18:42.684298] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.701 [2024-11-26 21:18:42.684322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.701 [2024-11-26 21:18:42.684341] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:24.701 [2024-11-26 21:18:42.684362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.701 [2024-11-26 21:18:42.684380] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:24.701 [2024-11-26 21:18:42.684401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:24.701 21:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.701 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.701 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.701 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.701 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.701 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.702 21:18:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.702 "name": "Existed_Raid", 00:11:24.702 "uuid": "dc8b0a5e-b01c-4a2d-82a1-e3d9abdaaa9f", 00:11:24.702 "strip_size_kb": 0, 00:11:24.702 "state": "configuring", 00:11:24.702 "raid_level": "raid1", 00:11:24.702 "superblock": true, 00:11:24.702 "num_base_bdevs": 4, 00:11:24.702 "num_base_bdevs_discovered": 0, 00:11:24.702 "num_base_bdevs_operational": 4, 00:11:24.702 "base_bdevs_list": [ 00:11:24.702 { 00:11:24.702 "name": "BaseBdev1", 00:11:24.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.702 "is_configured": false, 00:11:24.702 "data_offset": 0, 00:11:24.702 "data_size": 0 00:11:24.702 }, 00:11:24.702 { 00:11:24.702 "name": "BaseBdev2", 00:11:24.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.702 "is_configured": false, 00:11:24.702 "data_offset": 0, 00:11:24.702 "data_size": 0 00:11:24.702 }, 00:11:24.702 { 00:11:24.702 "name": "BaseBdev3", 00:11:24.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.702 "is_configured": false, 00:11:24.702 "data_offset": 0, 00:11:24.702 "data_size": 0 00:11:24.702 }, 00:11:24.702 { 00:11:24.702 "name": "BaseBdev4", 00:11:24.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.702 "is_configured": false, 00:11:24.702 "data_offset": 0, 00:11:24.702 "data_size": 0 00:11:24.702 } 00:11:24.702 ] 00:11:24.702 }' 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.702 21:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.272 [2024-11-26 21:18:43.135369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.272 [2024-11-26 21:18:43.135409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.272 [2024-11-26 21:18:43.143343] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.272 [2024-11-26 21:18:43.143387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.272 [2024-11-26 21:18:43.143396] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.272 [2024-11-26 21:18:43.143406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.272 [2024-11-26 21:18:43.143412] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:25.272 [2024-11-26 21:18:43.143421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:25.272 [2024-11-26 21:18:43.143427] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:25.272 [2024-11-26 21:18:43.143435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.272 [2024-11-26 21:18:43.186221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.272 BaseBdev1 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.272 [ 00:11:25.272 { 00:11:25.272 "name": "BaseBdev1", 00:11:25.272 "aliases": [ 00:11:25.272 "9db3b24e-7748-4c75-8da2-562f7551ca5b" 00:11:25.272 ], 00:11:25.272 "product_name": "Malloc disk", 00:11:25.272 "block_size": 512, 00:11:25.272 "num_blocks": 65536, 00:11:25.272 "uuid": "9db3b24e-7748-4c75-8da2-562f7551ca5b", 00:11:25.272 "assigned_rate_limits": { 00:11:25.272 "rw_ios_per_sec": 0, 00:11:25.272 "rw_mbytes_per_sec": 0, 00:11:25.272 "r_mbytes_per_sec": 0, 00:11:25.272 "w_mbytes_per_sec": 0 00:11:25.272 }, 00:11:25.272 "claimed": true, 00:11:25.272 "claim_type": "exclusive_write", 00:11:25.272 "zoned": false, 00:11:25.272 "supported_io_types": { 00:11:25.272 "read": true, 00:11:25.272 "write": true, 00:11:25.272 "unmap": true, 00:11:25.272 "flush": true, 00:11:25.272 "reset": true, 00:11:25.272 "nvme_admin": false, 00:11:25.272 "nvme_io": false, 00:11:25.272 "nvme_io_md": false, 00:11:25.272 "write_zeroes": true, 00:11:25.272 "zcopy": true, 00:11:25.272 "get_zone_info": false, 00:11:25.272 "zone_management": false, 00:11:25.272 "zone_append": false, 00:11:25.272 "compare": false, 00:11:25.272 "compare_and_write": false, 00:11:25.272 "abort": true, 00:11:25.272 "seek_hole": false, 00:11:25.272 "seek_data": false, 00:11:25.272 "copy": true, 00:11:25.272 "nvme_iov_md": false 00:11:25.272 }, 00:11:25.272 "memory_domains": [ 00:11:25.272 { 00:11:25.272 "dma_device_id": "system", 00:11:25.272 "dma_device_type": 1 00:11:25.272 }, 00:11:25.272 { 00:11:25.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.272 "dma_device_type": 2 00:11:25.272 } 00:11:25.272 ], 00:11:25.272 "driver_specific": {} 
00:11:25.272 } 00:11:25.272 ] 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.272 "name": "Existed_Raid", 00:11:25.272 "uuid": "4ff76db8-f8a8-4eee-a3d0-a13b379a0190", 00:11:25.272 "strip_size_kb": 0, 00:11:25.272 "state": "configuring", 00:11:25.272 "raid_level": "raid1", 00:11:25.272 "superblock": true, 00:11:25.272 "num_base_bdevs": 4, 00:11:25.272 "num_base_bdevs_discovered": 1, 00:11:25.272 "num_base_bdevs_operational": 4, 00:11:25.272 "base_bdevs_list": [ 00:11:25.272 { 00:11:25.272 "name": "BaseBdev1", 00:11:25.272 "uuid": "9db3b24e-7748-4c75-8da2-562f7551ca5b", 00:11:25.272 "is_configured": true, 00:11:25.272 "data_offset": 2048, 00:11:25.272 "data_size": 63488 00:11:25.272 }, 00:11:25.272 { 00:11:25.272 "name": "BaseBdev2", 00:11:25.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.272 "is_configured": false, 00:11:25.272 "data_offset": 0, 00:11:25.272 "data_size": 0 00:11:25.272 }, 00:11:25.272 { 00:11:25.272 "name": "BaseBdev3", 00:11:25.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.272 "is_configured": false, 00:11:25.272 "data_offset": 0, 00:11:25.272 "data_size": 0 00:11:25.272 }, 00:11:25.272 { 00:11:25.272 "name": "BaseBdev4", 00:11:25.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.272 "is_configured": false, 00:11:25.272 "data_offset": 0, 00:11:25.272 "data_size": 0 00:11:25.272 } 00:11:25.272 ] 00:11:25.272 }' 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.272 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.532 [2024-11-26 21:18:43.609589] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.532 [2024-11-26 21:18:43.609690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.532 [2024-11-26 21:18:43.617621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.532 [2024-11-26 21:18:43.619645] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.532 [2024-11-26 21:18:43.619736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.532 [2024-11-26 21:18:43.619772] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:25.532 [2024-11-26 21:18:43.619801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:25.532 [2024-11-26 21:18:43.619824] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:25.532 [2024-11-26 21:18:43.619849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:25.532 21:18:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.532 "name": 
"Existed_Raid", 00:11:25.532 "uuid": "9dc8e974-807d-4829-85f5-06ddeb17fd6e", 00:11:25.532 "strip_size_kb": 0, 00:11:25.532 "state": "configuring", 00:11:25.532 "raid_level": "raid1", 00:11:25.532 "superblock": true, 00:11:25.532 "num_base_bdevs": 4, 00:11:25.532 "num_base_bdevs_discovered": 1, 00:11:25.532 "num_base_bdevs_operational": 4, 00:11:25.532 "base_bdevs_list": [ 00:11:25.532 { 00:11:25.532 "name": "BaseBdev1", 00:11:25.532 "uuid": "9db3b24e-7748-4c75-8da2-562f7551ca5b", 00:11:25.532 "is_configured": true, 00:11:25.532 "data_offset": 2048, 00:11:25.532 "data_size": 63488 00:11:25.532 }, 00:11:25.532 { 00:11:25.532 "name": "BaseBdev2", 00:11:25.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.532 "is_configured": false, 00:11:25.532 "data_offset": 0, 00:11:25.532 "data_size": 0 00:11:25.532 }, 00:11:25.532 { 00:11:25.532 "name": "BaseBdev3", 00:11:25.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.532 "is_configured": false, 00:11:25.532 "data_offset": 0, 00:11:25.532 "data_size": 0 00:11:25.532 }, 00:11:25.532 { 00:11:25.532 "name": "BaseBdev4", 00:11:25.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.532 "is_configured": false, 00:11:25.532 "data_offset": 0, 00:11:25.532 "data_size": 0 00:11:25.532 } 00:11:25.532 ] 00:11:25.532 }' 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.532 21:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.102 [2024-11-26 21:18:44.118036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.102 
BaseBdev2 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.102 [ 00:11:26.102 { 00:11:26.102 "name": "BaseBdev2", 00:11:26.102 "aliases": [ 00:11:26.102 "cc2b9928-45f1-49ab-b727-01bad931ada6" 00:11:26.102 ], 00:11:26.102 "product_name": "Malloc disk", 00:11:26.102 "block_size": 512, 00:11:26.102 "num_blocks": 65536, 00:11:26.102 "uuid": "cc2b9928-45f1-49ab-b727-01bad931ada6", 00:11:26.102 "assigned_rate_limits": { 
00:11:26.102 "rw_ios_per_sec": 0, 00:11:26.102 "rw_mbytes_per_sec": 0, 00:11:26.102 "r_mbytes_per_sec": 0, 00:11:26.102 "w_mbytes_per_sec": 0 00:11:26.102 }, 00:11:26.102 "claimed": true, 00:11:26.102 "claim_type": "exclusive_write", 00:11:26.102 "zoned": false, 00:11:26.102 "supported_io_types": { 00:11:26.102 "read": true, 00:11:26.102 "write": true, 00:11:26.102 "unmap": true, 00:11:26.102 "flush": true, 00:11:26.102 "reset": true, 00:11:26.102 "nvme_admin": false, 00:11:26.102 "nvme_io": false, 00:11:26.102 "nvme_io_md": false, 00:11:26.102 "write_zeroes": true, 00:11:26.102 "zcopy": true, 00:11:26.102 "get_zone_info": false, 00:11:26.102 "zone_management": false, 00:11:26.102 "zone_append": false, 00:11:26.102 "compare": false, 00:11:26.102 "compare_and_write": false, 00:11:26.102 "abort": true, 00:11:26.102 "seek_hole": false, 00:11:26.102 "seek_data": false, 00:11:26.102 "copy": true, 00:11:26.102 "nvme_iov_md": false 00:11:26.102 }, 00:11:26.102 "memory_domains": [ 00:11:26.102 { 00:11:26.102 "dma_device_id": "system", 00:11:26.102 "dma_device_type": 1 00:11:26.102 }, 00:11:26.102 { 00:11:26.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.102 "dma_device_type": 2 00:11:26.102 } 00:11:26.102 ], 00:11:26.102 "driver_specific": {} 00:11:26.102 } 00:11:26.102 ] 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.102 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.103 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.103 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.103 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.103 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.103 "name": "Existed_Raid", 00:11:26.103 "uuid": "9dc8e974-807d-4829-85f5-06ddeb17fd6e", 00:11:26.103 "strip_size_kb": 0, 00:11:26.103 "state": "configuring", 00:11:26.103 "raid_level": "raid1", 00:11:26.103 "superblock": true, 00:11:26.103 "num_base_bdevs": 4, 00:11:26.103 "num_base_bdevs_discovered": 2, 00:11:26.103 "num_base_bdevs_operational": 4, 00:11:26.103 
"base_bdevs_list": [ 00:11:26.103 { 00:11:26.103 "name": "BaseBdev1", 00:11:26.103 "uuid": "9db3b24e-7748-4c75-8da2-562f7551ca5b", 00:11:26.103 "is_configured": true, 00:11:26.103 "data_offset": 2048, 00:11:26.103 "data_size": 63488 00:11:26.103 }, 00:11:26.103 { 00:11:26.103 "name": "BaseBdev2", 00:11:26.103 "uuid": "cc2b9928-45f1-49ab-b727-01bad931ada6", 00:11:26.103 "is_configured": true, 00:11:26.103 "data_offset": 2048, 00:11:26.103 "data_size": 63488 00:11:26.103 }, 00:11:26.103 { 00:11:26.103 "name": "BaseBdev3", 00:11:26.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.103 "is_configured": false, 00:11:26.103 "data_offset": 0, 00:11:26.103 "data_size": 0 00:11:26.103 }, 00:11:26.103 { 00:11:26.103 "name": "BaseBdev4", 00:11:26.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.103 "is_configured": false, 00:11:26.103 "data_offset": 0, 00:11:26.103 "data_size": 0 00:11:26.103 } 00:11:26.103 ] 00:11:26.103 }' 00:11:26.103 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.103 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.671 [2024-11-26 21:18:44.616753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.671 BaseBdev3 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.671 [ 00:11:26.671 { 00:11:26.671 "name": "BaseBdev3", 00:11:26.671 "aliases": [ 00:11:26.671 "a9e3278f-9ee0-44f0-be83-1e3e0d5f8043" 00:11:26.671 ], 00:11:26.671 "product_name": "Malloc disk", 00:11:26.671 "block_size": 512, 00:11:26.671 "num_blocks": 65536, 00:11:26.671 "uuid": "a9e3278f-9ee0-44f0-be83-1e3e0d5f8043", 00:11:26.671 "assigned_rate_limits": { 00:11:26.671 "rw_ios_per_sec": 0, 00:11:26.671 "rw_mbytes_per_sec": 0, 00:11:26.671 "r_mbytes_per_sec": 0, 00:11:26.671 "w_mbytes_per_sec": 0 00:11:26.671 }, 00:11:26.671 "claimed": true, 00:11:26.671 "claim_type": "exclusive_write", 00:11:26.671 "zoned": false, 00:11:26.671 "supported_io_types": { 00:11:26.671 "read": true, 00:11:26.671 
"write": true, 00:11:26.671 "unmap": true, 00:11:26.671 "flush": true, 00:11:26.671 "reset": true, 00:11:26.671 "nvme_admin": false, 00:11:26.671 "nvme_io": false, 00:11:26.671 "nvme_io_md": false, 00:11:26.671 "write_zeroes": true, 00:11:26.671 "zcopy": true, 00:11:26.671 "get_zone_info": false, 00:11:26.671 "zone_management": false, 00:11:26.671 "zone_append": false, 00:11:26.671 "compare": false, 00:11:26.671 "compare_and_write": false, 00:11:26.671 "abort": true, 00:11:26.671 "seek_hole": false, 00:11:26.671 "seek_data": false, 00:11:26.671 "copy": true, 00:11:26.671 "nvme_iov_md": false 00:11:26.671 }, 00:11:26.671 "memory_domains": [ 00:11:26.671 { 00:11:26.671 "dma_device_id": "system", 00:11:26.671 "dma_device_type": 1 00:11:26.671 }, 00:11:26.671 { 00:11:26.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.671 "dma_device_type": 2 00:11:26.671 } 00:11:26.671 ], 00:11:26.671 "driver_specific": {} 00:11:26.671 } 00:11:26.671 ] 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.671 "name": "Existed_Raid", 00:11:26.671 "uuid": "9dc8e974-807d-4829-85f5-06ddeb17fd6e", 00:11:26.671 "strip_size_kb": 0, 00:11:26.671 "state": "configuring", 00:11:26.671 "raid_level": "raid1", 00:11:26.671 "superblock": true, 00:11:26.671 "num_base_bdevs": 4, 00:11:26.671 "num_base_bdevs_discovered": 3, 00:11:26.671 "num_base_bdevs_operational": 4, 00:11:26.671 "base_bdevs_list": [ 00:11:26.671 { 00:11:26.671 "name": "BaseBdev1", 00:11:26.671 "uuid": "9db3b24e-7748-4c75-8da2-562f7551ca5b", 00:11:26.671 "is_configured": true, 00:11:26.671 "data_offset": 2048, 00:11:26.671 "data_size": 63488 00:11:26.671 }, 00:11:26.671 { 00:11:26.671 "name": "BaseBdev2", 00:11:26.671 "uuid": 
"cc2b9928-45f1-49ab-b727-01bad931ada6", 00:11:26.671 "is_configured": true, 00:11:26.671 "data_offset": 2048, 00:11:26.671 "data_size": 63488 00:11:26.671 }, 00:11:26.671 { 00:11:26.671 "name": "BaseBdev3", 00:11:26.671 "uuid": "a9e3278f-9ee0-44f0-be83-1e3e0d5f8043", 00:11:26.671 "is_configured": true, 00:11:26.671 "data_offset": 2048, 00:11:26.671 "data_size": 63488 00:11:26.671 }, 00:11:26.671 { 00:11:26.671 "name": "BaseBdev4", 00:11:26.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.671 "is_configured": false, 00:11:26.671 "data_offset": 0, 00:11:26.671 "data_size": 0 00:11:26.671 } 00:11:26.671 ] 00:11:26.671 }' 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.671 21:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.931 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:26.931 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.931 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.190 [2024-11-26 21:18:45.114720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:27.190 [2024-11-26 21:18:45.114996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:27.190 [2024-11-26 21:18:45.115047] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:27.190 BaseBdev4 00:11:27.190 [2024-11-26 21:18:45.115353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:27.190 [2024-11-26 21:18:45.115524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:27.190 [2024-11-26 21:18:45.115540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:27.190 [2024-11-26 21:18:45.115707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.190 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.190 [ 00:11:27.190 { 00:11:27.190 "name": "BaseBdev4", 00:11:27.190 "aliases": [ 00:11:27.190 "e354168f-f9c6-4ce6-af3b-8785d9968547" 00:11:27.190 ], 00:11:27.190 "product_name": "Malloc disk", 00:11:27.190 "block_size": 512, 00:11:27.190 
"num_blocks": 65536, 00:11:27.190 "uuid": "e354168f-f9c6-4ce6-af3b-8785d9968547", 00:11:27.190 "assigned_rate_limits": { 00:11:27.190 "rw_ios_per_sec": 0, 00:11:27.190 "rw_mbytes_per_sec": 0, 00:11:27.190 "r_mbytes_per_sec": 0, 00:11:27.190 "w_mbytes_per_sec": 0 00:11:27.190 }, 00:11:27.190 "claimed": true, 00:11:27.190 "claim_type": "exclusive_write", 00:11:27.190 "zoned": false, 00:11:27.190 "supported_io_types": { 00:11:27.190 "read": true, 00:11:27.190 "write": true, 00:11:27.190 "unmap": true, 00:11:27.190 "flush": true, 00:11:27.190 "reset": true, 00:11:27.190 "nvme_admin": false, 00:11:27.190 "nvme_io": false, 00:11:27.190 "nvme_io_md": false, 00:11:27.190 "write_zeroes": true, 00:11:27.190 "zcopy": true, 00:11:27.190 "get_zone_info": false, 00:11:27.190 "zone_management": false, 00:11:27.190 "zone_append": false, 00:11:27.190 "compare": false, 00:11:27.190 "compare_and_write": false, 00:11:27.190 "abort": true, 00:11:27.190 "seek_hole": false, 00:11:27.190 "seek_data": false, 00:11:27.190 "copy": true, 00:11:27.190 "nvme_iov_md": false 00:11:27.190 }, 00:11:27.190 "memory_domains": [ 00:11:27.190 { 00:11:27.190 "dma_device_id": "system", 00:11:27.190 "dma_device_type": 1 00:11:27.190 }, 00:11:27.190 { 00:11:27.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.190 "dma_device_type": 2 00:11:27.190 } 00:11:27.190 ], 00:11:27.190 "driver_specific": {} 00:11:27.190 } 00:11:27.190 ] 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.191 "name": "Existed_Raid", 00:11:27.191 "uuid": "9dc8e974-807d-4829-85f5-06ddeb17fd6e", 00:11:27.191 "strip_size_kb": 0, 00:11:27.191 "state": "online", 00:11:27.191 "raid_level": "raid1", 00:11:27.191 "superblock": true, 00:11:27.191 "num_base_bdevs": 4, 
00:11:27.191 "num_base_bdevs_discovered": 4, 00:11:27.191 "num_base_bdevs_operational": 4, 00:11:27.191 "base_bdevs_list": [ 00:11:27.191 { 00:11:27.191 "name": "BaseBdev1", 00:11:27.191 "uuid": "9db3b24e-7748-4c75-8da2-562f7551ca5b", 00:11:27.191 "is_configured": true, 00:11:27.191 "data_offset": 2048, 00:11:27.191 "data_size": 63488 00:11:27.191 }, 00:11:27.191 { 00:11:27.191 "name": "BaseBdev2", 00:11:27.191 "uuid": "cc2b9928-45f1-49ab-b727-01bad931ada6", 00:11:27.191 "is_configured": true, 00:11:27.191 "data_offset": 2048, 00:11:27.191 "data_size": 63488 00:11:27.191 }, 00:11:27.191 { 00:11:27.191 "name": "BaseBdev3", 00:11:27.191 "uuid": "a9e3278f-9ee0-44f0-be83-1e3e0d5f8043", 00:11:27.191 "is_configured": true, 00:11:27.191 "data_offset": 2048, 00:11:27.191 "data_size": 63488 00:11:27.191 }, 00:11:27.191 { 00:11:27.191 "name": "BaseBdev4", 00:11:27.191 "uuid": "e354168f-f9c6-4ce6-af3b-8785d9968547", 00:11:27.191 "is_configured": true, 00:11:27.191 "data_offset": 2048, 00:11:27.191 "data_size": 63488 00:11:27.191 } 00:11:27.191 ] 00:11:27.191 }' 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.191 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.451 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:27.451 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:27.451 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:27.451 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:27.451 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:27.451 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:27.712 
21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:27.712 [2024-11-26 21:18:45.610286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.712 "name": "Existed_Raid", 00:11:27.712 "aliases": [ 00:11:27.712 "9dc8e974-807d-4829-85f5-06ddeb17fd6e" 00:11:27.712 ], 00:11:27.712 "product_name": "Raid Volume", 00:11:27.712 "block_size": 512, 00:11:27.712 "num_blocks": 63488, 00:11:27.712 "uuid": "9dc8e974-807d-4829-85f5-06ddeb17fd6e", 00:11:27.712 "assigned_rate_limits": { 00:11:27.712 "rw_ios_per_sec": 0, 00:11:27.712 "rw_mbytes_per_sec": 0, 00:11:27.712 "r_mbytes_per_sec": 0, 00:11:27.712 "w_mbytes_per_sec": 0 00:11:27.712 }, 00:11:27.712 "claimed": false, 00:11:27.712 "zoned": false, 00:11:27.712 "supported_io_types": { 00:11:27.712 "read": true, 00:11:27.712 "write": true, 00:11:27.712 "unmap": false, 00:11:27.712 "flush": false, 00:11:27.712 "reset": true, 00:11:27.712 "nvme_admin": false, 00:11:27.712 "nvme_io": false, 00:11:27.712 "nvme_io_md": false, 00:11:27.712 "write_zeroes": true, 00:11:27.712 "zcopy": false, 00:11:27.712 "get_zone_info": false, 00:11:27.712 "zone_management": false, 00:11:27.712 "zone_append": false, 00:11:27.712 "compare": false, 00:11:27.712 "compare_and_write": false, 00:11:27.712 "abort": false, 00:11:27.712 "seek_hole": false, 00:11:27.712 "seek_data": false, 00:11:27.712 "copy": false, 00:11:27.712 
"nvme_iov_md": false 00:11:27.712 }, 00:11:27.712 "memory_domains": [ 00:11:27.712 { 00:11:27.712 "dma_device_id": "system", 00:11:27.712 "dma_device_type": 1 00:11:27.712 }, 00:11:27.712 { 00:11:27.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.712 "dma_device_type": 2 00:11:27.712 }, 00:11:27.712 { 00:11:27.712 "dma_device_id": "system", 00:11:27.712 "dma_device_type": 1 00:11:27.712 }, 00:11:27.712 { 00:11:27.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.712 "dma_device_type": 2 00:11:27.712 }, 00:11:27.712 { 00:11:27.712 "dma_device_id": "system", 00:11:27.712 "dma_device_type": 1 00:11:27.712 }, 00:11:27.712 { 00:11:27.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.712 "dma_device_type": 2 00:11:27.712 }, 00:11:27.712 { 00:11:27.712 "dma_device_id": "system", 00:11:27.712 "dma_device_type": 1 00:11:27.712 }, 00:11:27.712 { 00:11:27.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.712 "dma_device_type": 2 00:11:27.712 } 00:11:27.712 ], 00:11:27.712 "driver_specific": { 00:11:27.712 "raid": { 00:11:27.712 "uuid": "9dc8e974-807d-4829-85f5-06ddeb17fd6e", 00:11:27.712 "strip_size_kb": 0, 00:11:27.712 "state": "online", 00:11:27.712 "raid_level": "raid1", 00:11:27.712 "superblock": true, 00:11:27.712 "num_base_bdevs": 4, 00:11:27.712 "num_base_bdevs_discovered": 4, 00:11:27.712 "num_base_bdevs_operational": 4, 00:11:27.712 "base_bdevs_list": [ 00:11:27.712 { 00:11:27.712 "name": "BaseBdev1", 00:11:27.712 "uuid": "9db3b24e-7748-4c75-8da2-562f7551ca5b", 00:11:27.712 "is_configured": true, 00:11:27.712 "data_offset": 2048, 00:11:27.712 "data_size": 63488 00:11:27.712 }, 00:11:27.712 { 00:11:27.712 "name": "BaseBdev2", 00:11:27.712 "uuid": "cc2b9928-45f1-49ab-b727-01bad931ada6", 00:11:27.712 "is_configured": true, 00:11:27.712 "data_offset": 2048, 00:11:27.712 "data_size": 63488 00:11:27.712 }, 00:11:27.712 { 00:11:27.712 "name": "BaseBdev3", 00:11:27.712 "uuid": "a9e3278f-9ee0-44f0-be83-1e3e0d5f8043", 00:11:27.712 "is_configured": true, 
00:11:27.712 "data_offset": 2048, 00:11:27.712 "data_size": 63488 00:11:27.712 }, 00:11:27.712 { 00:11:27.712 "name": "BaseBdev4", 00:11:27.712 "uuid": "e354168f-f9c6-4ce6-af3b-8785d9968547", 00:11:27.712 "is_configured": true, 00:11:27.712 "data_offset": 2048, 00:11:27.712 "data_size": 63488 00:11:27.712 } 00:11:27.712 ] 00:11:27.712 } 00:11:27.712 } 00:11:27.712 }' 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:27.712 BaseBdev2 00:11:27.712 BaseBdev3 00:11:27.712 BaseBdev4' 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.712 21:18:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.712 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.713 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.713 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.713 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.713 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.713 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:27.713 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.713 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.713 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.713 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.972 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.972 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.972 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:27.973 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:27.973 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.973 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.973 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.973 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.973 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.973 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.973 21:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:27.973 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.973 21:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.973 [2024-11-26 21:18:45.921437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:27.973 21:18:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.973 "name": "Existed_Raid", 00:11:27.973 "uuid": "9dc8e974-807d-4829-85f5-06ddeb17fd6e", 00:11:27.973 "strip_size_kb": 0, 00:11:27.973 
"state": "online", 00:11:27.973 "raid_level": "raid1", 00:11:27.973 "superblock": true, 00:11:27.973 "num_base_bdevs": 4, 00:11:27.973 "num_base_bdevs_discovered": 3, 00:11:27.973 "num_base_bdevs_operational": 3, 00:11:27.973 "base_bdevs_list": [ 00:11:27.973 { 00:11:27.973 "name": null, 00:11:27.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.973 "is_configured": false, 00:11:27.973 "data_offset": 0, 00:11:27.973 "data_size": 63488 00:11:27.973 }, 00:11:27.973 { 00:11:27.973 "name": "BaseBdev2", 00:11:27.973 "uuid": "cc2b9928-45f1-49ab-b727-01bad931ada6", 00:11:27.973 "is_configured": true, 00:11:27.973 "data_offset": 2048, 00:11:27.973 "data_size": 63488 00:11:27.973 }, 00:11:27.973 { 00:11:27.973 "name": "BaseBdev3", 00:11:27.973 "uuid": "a9e3278f-9ee0-44f0-be83-1e3e0d5f8043", 00:11:27.973 "is_configured": true, 00:11:27.973 "data_offset": 2048, 00:11:27.973 "data_size": 63488 00:11:27.973 }, 00:11:27.973 { 00:11:27.973 "name": "BaseBdev4", 00:11:27.973 "uuid": "e354168f-f9c6-4ce6-af3b-8785d9968547", 00:11:27.973 "is_configured": true, 00:11:27.973 "data_offset": 2048, 00:11:27.973 "data_size": 63488 00:11:27.973 } 00:11:27.973 ] 00:11:27.973 }' 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.973 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.544 21:18:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.544 [2024-11-26 21:18:46.516337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.544 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.544 [2024-11-26 21:18:46.671609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 [2024-11-26 21:18:46.822519] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:28.804 [2024-11-26 21:18:46.822629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.804 [2024-11-26 21:18:46.917236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.804 [2024-11-26 21:18:46.917305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.804 [2024-11-26 21:18:46.917318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.065 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:29.065 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:29.065 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:29.065 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:29.065 21:18:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:29.065 21:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:29.065 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.065 21:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.065 BaseBdev2 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:29.065 [ 00:11:29.065 { 00:11:29.065 "name": "BaseBdev2", 00:11:29.065 "aliases": [ 00:11:29.065 "16059db5-2b92-4a0b-a79d-7446474181fd" 00:11:29.065 ], 00:11:29.065 "product_name": "Malloc disk", 00:11:29.065 "block_size": 512, 00:11:29.065 "num_blocks": 65536, 00:11:29.065 "uuid": "16059db5-2b92-4a0b-a79d-7446474181fd", 00:11:29.065 "assigned_rate_limits": { 00:11:29.065 "rw_ios_per_sec": 0, 00:11:29.065 "rw_mbytes_per_sec": 0, 00:11:29.065 "r_mbytes_per_sec": 0, 00:11:29.065 "w_mbytes_per_sec": 0 00:11:29.065 }, 00:11:29.065 "claimed": false, 00:11:29.065 "zoned": false, 00:11:29.065 "supported_io_types": { 00:11:29.065 "read": true, 00:11:29.065 "write": true, 00:11:29.065 "unmap": true, 00:11:29.065 "flush": true, 00:11:29.065 "reset": true, 00:11:29.065 "nvme_admin": false, 00:11:29.065 "nvme_io": false, 00:11:29.065 "nvme_io_md": false, 00:11:29.065 "write_zeroes": true, 00:11:29.065 "zcopy": true, 00:11:29.065 "get_zone_info": false, 00:11:29.065 "zone_management": false, 00:11:29.065 "zone_append": false, 00:11:29.065 "compare": false, 00:11:29.065 "compare_and_write": false, 00:11:29.065 "abort": true, 00:11:29.065 "seek_hole": false, 00:11:29.065 "seek_data": false, 00:11:29.065 "copy": true, 00:11:29.065 "nvme_iov_md": false 00:11:29.065 }, 00:11:29.065 "memory_domains": [ 00:11:29.065 { 00:11:29.065 "dma_device_id": "system", 00:11:29.065 "dma_device_type": 1 00:11:29.065 }, 00:11:29.065 { 00:11:29.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.065 "dma_device_type": 2 00:11:29.065 } 00:11:29.065 ], 00:11:29.065 "driver_specific": {} 00:11:29.065 } 00:11:29.065 ] 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:29.065 21:18:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.065 BaseBdev3 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:29.065 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.065 21:18:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.065 [ 00:11:29.065 { 00:11:29.065 "name": "BaseBdev3", 00:11:29.065 "aliases": [ 00:11:29.065 "9664c0bb-8baa-4d17-be94-74acf3075a25" 00:11:29.065 ], 00:11:29.065 "product_name": "Malloc disk", 00:11:29.065 "block_size": 512, 00:11:29.065 "num_blocks": 65536, 00:11:29.065 "uuid": "9664c0bb-8baa-4d17-be94-74acf3075a25", 00:11:29.065 "assigned_rate_limits": { 00:11:29.065 "rw_ios_per_sec": 0, 00:11:29.065 "rw_mbytes_per_sec": 0, 00:11:29.065 "r_mbytes_per_sec": 0, 00:11:29.065 "w_mbytes_per_sec": 0 00:11:29.065 }, 00:11:29.065 "claimed": false, 00:11:29.065 "zoned": false, 00:11:29.065 "supported_io_types": { 00:11:29.065 "read": true, 00:11:29.065 "write": true, 00:11:29.065 "unmap": true, 00:11:29.065 "flush": true, 00:11:29.065 "reset": true, 00:11:29.065 "nvme_admin": false, 00:11:29.065 "nvme_io": false, 00:11:29.065 "nvme_io_md": false, 00:11:29.065 "write_zeroes": true, 00:11:29.065 "zcopy": true, 00:11:29.065 "get_zone_info": false, 00:11:29.065 "zone_management": false, 00:11:29.065 "zone_append": false, 00:11:29.065 "compare": false, 00:11:29.065 "compare_and_write": false, 00:11:29.065 "abort": true, 00:11:29.065 "seek_hole": false, 00:11:29.065 "seek_data": false, 00:11:29.065 "copy": true, 00:11:29.065 "nvme_iov_md": false 00:11:29.065 }, 00:11:29.066 "memory_domains": [ 00:11:29.066 { 00:11:29.066 "dma_device_id": "system", 00:11:29.066 "dma_device_type": 1 00:11:29.066 }, 00:11:29.066 { 00:11:29.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.066 "dma_device_type": 2 00:11:29.066 } 00:11:29.066 ], 00:11:29.066 "driver_specific": {} 00:11:29.066 } 00:11:29.066 ] 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.066 BaseBdev4 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.066 [ 00:11:29.066 { 00:11:29.066 "name": "BaseBdev4", 00:11:29.066 "aliases": [ 00:11:29.066 "44e45940-e92d-4b32-854a-b74996933402" 00:11:29.066 ], 00:11:29.066 "product_name": "Malloc disk", 00:11:29.066 "block_size": 512, 00:11:29.066 "num_blocks": 65536, 00:11:29.066 "uuid": "44e45940-e92d-4b32-854a-b74996933402", 00:11:29.066 "assigned_rate_limits": { 00:11:29.066 "rw_ios_per_sec": 0, 00:11:29.066 "rw_mbytes_per_sec": 0, 00:11:29.066 "r_mbytes_per_sec": 0, 00:11:29.066 "w_mbytes_per_sec": 0 00:11:29.066 }, 00:11:29.066 "claimed": false, 00:11:29.066 "zoned": false, 00:11:29.066 "supported_io_types": { 00:11:29.066 "read": true, 00:11:29.066 "write": true, 00:11:29.066 "unmap": true, 00:11:29.066 "flush": true, 00:11:29.066 "reset": true, 00:11:29.066 "nvme_admin": false, 00:11:29.066 "nvme_io": false, 00:11:29.066 "nvme_io_md": false, 00:11:29.066 "write_zeroes": true, 00:11:29.066 "zcopy": true, 00:11:29.066 "get_zone_info": false, 00:11:29.066 "zone_management": false, 00:11:29.066 "zone_append": false, 00:11:29.066 "compare": false, 00:11:29.066 "compare_and_write": false, 00:11:29.066 "abort": true, 00:11:29.066 "seek_hole": false, 00:11:29.066 "seek_data": false, 00:11:29.066 "copy": true, 00:11:29.066 "nvme_iov_md": false 00:11:29.066 }, 00:11:29.066 "memory_domains": [ 00:11:29.066 { 00:11:29.066 "dma_device_id": "system", 00:11:29.066 "dma_device_type": 1 00:11:29.066 }, 00:11:29.066 { 00:11:29.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.066 "dma_device_type": 2 00:11:29.066 } 00:11:29.066 ], 00:11:29.066 "driver_specific": {} 00:11:29.066 } 00:11:29.066 ] 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.066 [2024-11-26 21:18:47.169429] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:29.066 [2024-11-26 21:18:47.169478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:29.066 [2024-11-26 21:18:47.169498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.066 [2024-11-26 21:18:47.171363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.066 [2024-11-26 21:18:47.171415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.066 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.326 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.326 "name": "Existed_Raid", 00:11:29.326 "uuid": "159cfb88-2af8-45c3-bf3a-b77d20baa809", 00:11:29.326 "strip_size_kb": 0, 00:11:29.326 "state": "configuring", 00:11:29.326 "raid_level": "raid1", 00:11:29.326 "superblock": true, 00:11:29.326 "num_base_bdevs": 4, 00:11:29.326 "num_base_bdevs_discovered": 3, 00:11:29.326 "num_base_bdevs_operational": 4, 00:11:29.326 "base_bdevs_list": [ 00:11:29.326 { 00:11:29.326 "name": "BaseBdev1", 00:11:29.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.326 "is_configured": false, 00:11:29.326 "data_offset": 0, 00:11:29.326 "data_size": 0 00:11:29.326 }, 00:11:29.326 { 00:11:29.326 "name": "BaseBdev2", 00:11:29.326 "uuid": "16059db5-2b92-4a0b-a79d-7446474181fd", 
00:11:29.326 "is_configured": true, 00:11:29.326 "data_offset": 2048, 00:11:29.326 "data_size": 63488 00:11:29.326 }, 00:11:29.326 { 00:11:29.326 "name": "BaseBdev3", 00:11:29.326 "uuid": "9664c0bb-8baa-4d17-be94-74acf3075a25", 00:11:29.326 "is_configured": true, 00:11:29.326 "data_offset": 2048, 00:11:29.326 "data_size": 63488 00:11:29.326 }, 00:11:29.326 { 00:11:29.326 "name": "BaseBdev4", 00:11:29.326 "uuid": "44e45940-e92d-4b32-854a-b74996933402", 00:11:29.326 "is_configured": true, 00:11:29.326 "data_offset": 2048, 00:11:29.326 "data_size": 63488 00:11:29.326 } 00:11:29.326 ] 00:11:29.326 }' 00:11:29.326 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.326 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.588 [2024-11-26 21:18:47.588791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.588 "name": "Existed_Raid", 00:11:29.588 "uuid": "159cfb88-2af8-45c3-bf3a-b77d20baa809", 00:11:29.588 "strip_size_kb": 0, 00:11:29.588 "state": "configuring", 00:11:29.588 "raid_level": "raid1", 00:11:29.588 "superblock": true, 00:11:29.588 "num_base_bdevs": 4, 00:11:29.588 "num_base_bdevs_discovered": 2, 00:11:29.588 "num_base_bdevs_operational": 4, 00:11:29.588 "base_bdevs_list": [ 00:11:29.588 { 00:11:29.588 "name": "BaseBdev1", 00:11:29.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.588 "is_configured": false, 00:11:29.588 "data_offset": 0, 00:11:29.588 "data_size": 0 00:11:29.588 }, 00:11:29.588 { 00:11:29.588 "name": null, 00:11:29.588 "uuid": "16059db5-2b92-4a0b-a79d-7446474181fd", 00:11:29.588 
"is_configured": false, 00:11:29.588 "data_offset": 0, 00:11:29.588 "data_size": 63488 00:11:29.588 }, 00:11:29.588 { 00:11:29.588 "name": "BaseBdev3", 00:11:29.588 "uuid": "9664c0bb-8baa-4d17-be94-74acf3075a25", 00:11:29.588 "is_configured": true, 00:11:29.588 "data_offset": 2048, 00:11:29.588 "data_size": 63488 00:11:29.588 }, 00:11:29.588 { 00:11:29.588 "name": "BaseBdev4", 00:11:29.588 "uuid": "44e45940-e92d-4b32-854a-b74996933402", 00:11:29.588 "is_configured": true, 00:11:29.588 "data_offset": 2048, 00:11:29.588 "data_size": 63488 00:11:29.588 } 00:11:29.588 ] 00:11:29.588 }' 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.588 21:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.158 [2024-11-26 21:18:48.117290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.158 BaseBdev1 
00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.158 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.158 [ 00:11:30.158 { 00:11:30.158 "name": "BaseBdev1", 00:11:30.158 "aliases": [ 00:11:30.158 "335b076b-165f-4335-8772-481add36c7d5" 00:11:30.158 ], 00:11:30.158 "product_name": "Malloc disk", 00:11:30.158 "block_size": 512, 00:11:30.158 "num_blocks": 65536, 00:11:30.158 "uuid": "335b076b-165f-4335-8772-481add36c7d5", 00:11:30.158 "assigned_rate_limits": { 00:11:30.158 
"rw_ios_per_sec": 0, 00:11:30.158 "rw_mbytes_per_sec": 0, 00:11:30.158 "r_mbytes_per_sec": 0, 00:11:30.158 "w_mbytes_per_sec": 0 00:11:30.158 }, 00:11:30.158 "claimed": true, 00:11:30.158 "claim_type": "exclusive_write", 00:11:30.158 "zoned": false, 00:11:30.158 "supported_io_types": { 00:11:30.158 "read": true, 00:11:30.158 "write": true, 00:11:30.158 "unmap": true, 00:11:30.158 "flush": true, 00:11:30.158 "reset": true, 00:11:30.158 "nvme_admin": false, 00:11:30.158 "nvme_io": false, 00:11:30.158 "nvme_io_md": false, 00:11:30.158 "write_zeroes": true, 00:11:30.158 "zcopy": true, 00:11:30.158 "get_zone_info": false, 00:11:30.158 "zone_management": false, 00:11:30.158 "zone_append": false, 00:11:30.158 "compare": false, 00:11:30.158 "compare_and_write": false, 00:11:30.158 "abort": true, 00:11:30.158 "seek_hole": false, 00:11:30.158 "seek_data": false, 00:11:30.158 "copy": true, 00:11:30.158 "nvme_iov_md": false 00:11:30.158 }, 00:11:30.158 "memory_domains": [ 00:11:30.158 { 00:11:30.159 "dma_device_id": "system", 00:11:30.159 "dma_device_type": 1 00:11:30.159 }, 00:11:30.159 { 00:11:30.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.159 "dma_device_type": 2 00:11:30.159 } 00:11:30.159 ], 00:11:30.159 "driver_specific": {} 00:11:30.159 } 00:11:30.159 ] 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.159 "name": "Existed_Raid", 00:11:30.159 "uuid": "159cfb88-2af8-45c3-bf3a-b77d20baa809", 00:11:30.159 "strip_size_kb": 0, 00:11:30.159 "state": "configuring", 00:11:30.159 "raid_level": "raid1", 00:11:30.159 "superblock": true, 00:11:30.159 "num_base_bdevs": 4, 00:11:30.159 "num_base_bdevs_discovered": 3, 00:11:30.159 "num_base_bdevs_operational": 4, 00:11:30.159 "base_bdevs_list": [ 00:11:30.159 { 00:11:30.159 "name": "BaseBdev1", 00:11:30.159 "uuid": "335b076b-165f-4335-8772-481add36c7d5", 00:11:30.159 "is_configured": true, 00:11:30.159 "data_offset": 2048, 00:11:30.159 "data_size": 63488 
00:11:30.159 }, 00:11:30.159 { 00:11:30.159 "name": null, 00:11:30.159 "uuid": "16059db5-2b92-4a0b-a79d-7446474181fd", 00:11:30.159 "is_configured": false, 00:11:30.159 "data_offset": 0, 00:11:30.159 "data_size": 63488 00:11:30.159 }, 00:11:30.159 { 00:11:30.159 "name": "BaseBdev3", 00:11:30.159 "uuid": "9664c0bb-8baa-4d17-be94-74acf3075a25", 00:11:30.159 "is_configured": true, 00:11:30.159 "data_offset": 2048, 00:11:30.159 "data_size": 63488 00:11:30.159 }, 00:11:30.159 { 00:11:30.159 "name": "BaseBdev4", 00:11:30.159 "uuid": "44e45940-e92d-4b32-854a-b74996933402", 00:11:30.159 "is_configured": true, 00:11:30.159 "data_offset": 2048, 00:11:30.159 "data_size": 63488 00:11:30.159 } 00:11:30.159 ] 00:11:30.159 }' 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.159 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.728 
[2024-11-26 21:18:48.648434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.728 21:18:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.728 "name": "Existed_Raid", 00:11:30.728 "uuid": "159cfb88-2af8-45c3-bf3a-b77d20baa809", 00:11:30.728 "strip_size_kb": 0, 00:11:30.728 "state": "configuring", 00:11:30.728 "raid_level": "raid1", 00:11:30.728 "superblock": true, 00:11:30.728 "num_base_bdevs": 4, 00:11:30.728 "num_base_bdevs_discovered": 2, 00:11:30.728 "num_base_bdevs_operational": 4, 00:11:30.728 "base_bdevs_list": [ 00:11:30.728 { 00:11:30.728 "name": "BaseBdev1", 00:11:30.728 "uuid": "335b076b-165f-4335-8772-481add36c7d5", 00:11:30.728 "is_configured": true, 00:11:30.728 "data_offset": 2048, 00:11:30.728 "data_size": 63488 00:11:30.728 }, 00:11:30.728 { 00:11:30.728 "name": null, 00:11:30.728 "uuid": "16059db5-2b92-4a0b-a79d-7446474181fd", 00:11:30.728 "is_configured": false, 00:11:30.728 "data_offset": 0, 00:11:30.728 "data_size": 63488 00:11:30.728 }, 00:11:30.728 { 00:11:30.728 "name": null, 00:11:30.728 "uuid": "9664c0bb-8baa-4d17-be94-74acf3075a25", 00:11:30.728 "is_configured": false, 00:11:30.728 "data_offset": 0, 00:11:30.728 "data_size": 63488 00:11:30.728 }, 00:11:30.728 { 00:11:30.728 "name": "BaseBdev4", 00:11:30.728 "uuid": "44e45940-e92d-4b32-854a-b74996933402", 00:11:30.728 "is_configured": true, 00:11:30.728 "data_offset": 2048, 00:11:30.728 "data_size": 63488 00:11:30.728 } 00:11:30.728 ] 00:11:30.728 }' 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.728 21:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.988 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.988 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.988 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.988 21:18:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.988 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.988 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:30.988 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:30.988 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.988 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.246 [2024-11-26 21:18:49.147611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.246 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.246 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.247 "name": "Existed_Raid", 00:11:31.247 "uuid": "159cfb88-2af8-45c3-bf3a-b77d20baa809", 00:11:31.247 "strip_size_kb": 0, 00:11:31.247 "state": "configuring", 00:11:31.247 "raid_level": "raid1", 00:11:31.247 "superblock": true, 00:11:31.247 "num_base_bdevs": 4, 00:11:31.247 "num_base_bdevs_discovered": 3, 00:11:31.247 "num_base_bdevs_operational": 4, 00:11:31.247 "base_bdevs_list": [ 00:11:31.247 { 00:11:31.247 "name": "BaseBdev1", 00:11:31.247 "uuid": "335b076b-165f-4335-8772-481add36c7d5", 00:11:31.247 "is_configured": true, 00:11:31.247 "data_offset": 2048, 00:11:31.247 "data_size": 63488 00:11:31.247 }, 00:11:31.247 { 00:11:31.247 "name": null, 00:11:31.247 "uuid": "16059db5-2b92-4a0b-a79d-7446474181fd", 00:11:31.247 "is_configured": false, 00:11:31.247 "data_offset": 0, 00:11:31.247 "data_size": 63488 00:11:31.247 }, 00:11:31.247 { 00:11:31.247 "name": "BaseBdev3", 00:11:31.247 "uuid": "9664c0bb-8baa-4d17-be94-74acf3075a25", 00:11:31.247 "is_configured": true, 00:11:31.247 "data_offset": 2048, 00:11:31.247 "data_size": 63488 00:11:31.247 }, 00:11:31.247 { 00:11:31.247 "name": "BaseBdev4", 00:11:31.247 "uuid": 
"44e45940-e92d-4b32-854a-b74996933402", 00:11:31.247 "is_configured": true, 00:11:31.247 "data_offset": 2048, 00:11:31.247 "data_size": 63488 00:11:31.247 } 00:11:31.247 ] 00:11:31.247 }' 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.247 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.505 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.505 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:31.505 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.505 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:31.505 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:31.505 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.505 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 [2024-11-26 21:18:49.654781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.766 "name": "Existed_Raid", 00:11:31.766 "uuid": "159cfb88-2af8-45c3-bf3a-b77d20baa809", 00:11:31.766 "strip_size_kb": 0, 00:11:31.766 "state": "configuring", 00:11:31.766 "raid_level": "raid1", 00:11:31.766 "superblock": true, 00:11:31.766 "num_base_bdevs": 4, 00:11:31.766 "num_base_bdevs_discovered": 2, 00:11:31.766 "num_base_bdevs_operational": 4, 00:11:31.766 "base_bdevs_list": [ 00:11:31.766 { 00:11:31.766 "name": null, 00:11:31.766 
"uuid": "335b076b-165f-4335-8772-481add36c7d5", 00:11:31.766 "is_configured": false, 00:11:31.766 "data_offset": 0, 00:11:31.766 "data_size": 63488 00:11:31.766 }, 00:11:31.766 { 00:11:31.766 "name": null, 00:11:31.766 "uuid": "16059db5-2b92-4a0b-a79d-7446474181fd", 00:11:31.766 "is_configured": false, 00:11:31.766 "data_offset": 0, 00:11:31.766 "data_size": 63488 00:11:31.766 }, 00:11:31.766 { 00:11:31.766 "name": "BaseBdev3", 00:11:31.766 "uuid": "9664c0bb-8baa-4d17-be94-74acf3075a25", 00:11:31.766 "is_configured": true, 00:11:31.766 "data_offset": 2048, 00:11:31.766 "data_size": 63488 00:11:31.766 }, 00:11:31.766 { 00:11:31.766 "name": "BaseBdev4", 00:11:31.766 "uuid": "44e45940-e92d-4b32-854a-b74996933402", 00:11:31.766 "is_configured": true, 00:11:31.766 "data_offset": 2048, 00:11:31.766 "data_size": 63488 00:11:31.766 } 00:11:31.766 ] 00:11:31.766 }' 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.766 21:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.338 [2024-11-26 21:18:50.272160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.338 21:18:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.338 "name": "Existed_Raid", 00:11:32.338 "uuid": "159cfb88-2af8-45c3-bf3a-b77d20baa809", 00:11:32.338 "strip_size_kb": 0, 00:11:32.338 "state": "configuring", 00:11:32.338 "raid_level": "raid1", 00:11:32.338 "superblock": true, 00:11:32.338 "num_base_bdevs": 4, 00:11:32.338 "num_base_bdevs_discovered": 3, 00:11:32.338 "num_base_bdevs_operational": 4, 00:11:32.338 "base_bdevs_list": [ 00:11:32.338 { 00:11:32.338 "name": null, 00:11:32.338 "uuid": "335b076b-165f-4335-8772-481add36c7d5", 00:11:32.338 "is_configured": false, 00:11:32.338 "data_offset": 0, 00:11:32.338 "data_size": 63488 00:11:32.338 }, 00:11:32.338 { 00:11:32.338 "name": "BaseBdev2", 00:11:32.338 "uuid": "16059db5-2b92-4a0b-a79d-7446474181fd", 00:11:32.338 "is_configured": true, 00:11:32.338 "data_offset": 2048, 00:11:32.338 "data_size": 63488 00:11:32.338 }, 00:11:32.338 { 00:11:32.338 "name": "BaseBdev3", 00:11:32.338 "uuid": "9664c0bb-8baa-4d17-be94-74acf3075a25", 00:11:32.338 "is_configured": true, 00:11:32.338 "data_offset": 2048, 00:11:32.338 "data_size": 63488 00:11:32.338 }, 00:11:32.338 { 00:11:32.338 "name": "BaseBdev4", 00:11:32.338 "uuid": "44e45940-e92d-4b32-854a-b74996933402", 00:11:32.338 "is_configured": true, 00:11:32.338 "data_offset": 2048, 00:11:32.338 "data_size": 63488 00:11:32.338 } 00:11:32.338 ] 00:11:32.338 }' 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.338 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.597 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.597 21:18:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.597 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.597 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.597 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.597 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:32.597 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.597 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.597 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.597 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 335b076b-165f-4335-8772-481add36c7d5 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.856 [2024-11-26 21:18:50.821033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:32.856 [2024-11-26 21:18:50.821299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:32.856 [2024-11-26 21:18:50.821316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:32.856 NewBaseBdev 00:11:32.856 [2024-11-26 21:18:50.821604] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:32.856 [2024-11-26 21:18:50.821775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:32.856 [2024-11-26 21:18:50.821787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:32.856 [2024-11-26 21:18:50.821935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:32.856 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.856 
21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.856 [ 00:11:32.856 { 00:11:32.856 "name": "NewBaseBdev", 00:11:32.856 "aliases": [ 00:11:32.856 "335b076b-165f-4335-8772-481add36c7d5" 00:11:32.856 ], 00:11:32.856 "product_name": "Malloc disk", 00:11:32.857 "block_size": 512, 00:11:32.857 "num_blocks": 65536, 00:11:32.857 "uuid": "335b076b-165f-4335-8772-481add36c7d5", 00:11:32.857 "assigned_rate_limits": { 00:11:32.857 "rw_ios_per_sec": 0, 00:11:32.857 "rw_mbytes_per_sec": 0, 00:11:32.857 "r_mbytes_per_sec": 0, 00:11:32.857 "w_mbytes_per_sec": 0 00:11:32.857 }, 00:11:32.857 "claimed": true, 00:11:32.857 "claim_type": "exclusive_write", 00:11:32.857 "zoned": false, 00:11:32.857 "supported_io_types": { 00:11:32.857 "read": true, 00:11:32.857 "write": true, 00:11:32.857 "unmap": true, 00:11:32.857 "flush": true, 00:11:32.857 "reset": true, 00:11:32.857 "nvme_admin": false, 00:11:32.857 "nvme_io": false, 00:11:32.857 "nvme_io_md": false, 00:11:32.857 "write_zeroes": true, 00:11:32.857 "zcopy": true, 00:11:32.857 "get_zone_info": false, 00:11:32.857 "zone_management": false, 00:11:32.857 "zone_append": false, 00:11:32.857 "compare": false, 00:11:32.857 "compare_and_write": false, 00:11:32.857 "abort": true, 00:11:32.857 "seek_hole": false, 00:11:32.857 "seek_data": false, 00:11:32.857 "copy": true, 00:11:32.857 "nvme_iov_md": false 00:11:32.857 }, 00:11:32.857 "memory_domains": [ 00:11:32.857 { 00:11:32.857 "dma_device_id": "system", 00:11:32.857 "dma_device_type": 1 00:11:32.857 }, 00:11:32.857 { 00:11:32.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.857 "dma_device_type": 2 00:11:32.857 } 00:11:32.857 ], 00:11:32.857 "driver_specific": {} 00:11:32.857 } 00:11:32.857 ] 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:32.857 21:18:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.857 "name": "Existed_Raid", 00:11:32.857 "uuid": "159cfb88-2af8-45c3-bf3a-b77d20baa809", 00:11:32.857 "strip_size_kb": 0, 00:11:32.857 
"state": "online", 00:11:32.857 "raid_level": "raid1", 00:11:32.857 "superblock": true, 00:11:32.857 "num_base_bdevs": 4, 00:11:32.857 "num_base_bdevs_discovered": 4, 00:11:32.857 "num_base_bdevs_operational": 4, 00:11:32.857 "base_bdevs_list": [ 00:11:32.857 { 00:11:32.857 "name": "NewBaseBdev", 00:11:32.857 "uuid": "335b076b-165f-4335-8772-481add36c7d5", 00:11:32.857 "is_configured": true, 00:11:32.857 "data_offset": 2048, 00:11:32.857 "data_size": 63488 00:11:32.857 }, 00:11:32.857 { 00:11:32.857 "name": "BaseBdev2", 00:11:32.857 "uuid": "16059db5-2b92-4a0b-a79d-7446474181fd", 00:11:32.857 "is_configured": true, 00:11:32.857 "data_offset": 2048, 00:11:32.857 "data_size": 63488 00:11:32.857 }, 00:11:32.857 { 00:11:32.857 "name": "BaseBdev3", 00:11:32.857 "uuid": "9664c0bb-8baa-4d17-be94-74acf3075a25", 00:11:32.857 "is_configured": true, 00:11:32.857 "data_offset": 2048, 00:11:32.857 "data_size": 63488 00:11:32.857 }, 00:11:32.857 { 00:11:32.857 "name": "BaseBdev4", 00:11:32.857 "uuid": "44e45940-e92d-4b32-854a-b74996933402", 00:11:32.857 "is_configured": true, 00:11:32.857 "data_offset": 2048, 00:11:32.857 "data_size": 63488 00:11:32.857 } 00:11:32.857 ] 00:11:32.857 }' 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.857 21:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.430 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:33.430 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:33.430 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.430 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:33.430 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.430 
21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.430 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:33.430 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.430 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.430 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.430 [2024-11-26 21:18:51.300588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.430 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.430 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.430 "name": "Existed_Raid", 00:11:33.430 "aliases": [ 00:11:33.430 "159cfb88-2af8-45c3-bf3a-b77d20baa809" 00:11:33.430 ], 00:11:33.430 "product_name": "Raid Volume", 00:11:33.430 "block_size": 512, 00:11:33.430 "num_blocks": 63488, 00:11:33.430 "uuid": "159cfb88-2af8-45c3-bf3a-b77d20baa809", 00:11:33.430 "assigned_rate_limits": { 00:11:33.430 "rw_ios_per_sec": 0, 00:11:33.430 "rw_mbytes_per_sec": 0, 00:11:33.430 "r_mbytes_per_sec": 0, 00:11:33.430 "w_mbytes_per_sec": 0 00:11:33.430 }, 00:11:33.430 "claimed": false, 00:11:33.430 "zoned": false, 00:11:33.430 "supported_io_types": { 00:11:33.430 "read": true, 00:11:33.430 "write": true, 00:11:33.430 "unmap": false, 00:11:33.430 "flush": false, 00:11:33.430 "reset": true, 00:11:33.430 "nvme_admin": false, 00:11:33.430 "nvme_io": false, 00:11:33.430 "nvme_io_md": false, 00:11:33.430 "write_zeroes": true, 00:11:33.430 "zcopy": false, 00:11:33.430 "get_zone_info": false, 00:11:33.430 "zone_management": false, 00:11:33.430 "zone_append": false, 00:11:33.430 "compare": false, 00:11:33.430 "compare_and_write": false, 00:11:33.430 
"abort": false, 00:11:33.430 "seek_hole": false, 00:11:33.430 "seek_data": false, 00:11:33.430 "copy": false, 00:11:33.430 "nvme_iov_md": false 00:11:33.430 }, 00:11:33.430 "memory_domains": [ 00:11:33.430 { 00:11:33.430 "dma_device_id": "system", 00:11:33.430 "dma_device_type": 1 00:11:33.430 }, 00:11:33.430 { 00:11:33.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.430 "dma_device_type": 2 00:11:33.430 }, 00:11:33.430 { 00:11:33.430 "dma_device_id": "system", 00:11:33.430 "dma_device_type": 1 00:11:33.430 }, 00:11:33.430 { 00:11:33.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.430 "dma_device_type": 2 00:11:33.430 }, 00:11:33.430 { 00:11:33.430 "dma_device_id": "system", 00:11:33.430 "dma_device_type": 1 00:11:33.430 }, 00:11:33.431 { 00:11:33.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.431 "dma_device_type": 2 00:11:33.431 }, 00:11:33.431 { 00:11:33.431 "dma_device_id": "system", 00:11:33.431 "dma_device_type": 1 00:11:33.431 }, 00:11:33.431 { 00:11:33.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.431 "dma_device_type": 2 00:11:33.431 } 00:11:33.431 ], 00:11:33.431 "driver_specific": { 00:11:33.431 "raid": { 00:11:33.431 "uuid": "159cfb88-2af8-45c3-bf3a-b77d20baa809", 00:11:33.431 "strip_size_kb": 0, 00:11:33.431 "state": "online", 00:11:33.431 "raid_level": "raid1", 00:11:33.431 "superblock": true, 00:11:33.431 "num_base_bdevs": 4, 00:11:33.431 "num_base_bdevs_discovered": 4, 00:11:33.431 "num_base_bdevs_operational": 4, 00:11:33.431 "base_bdevs_list": [ 00:11:33.431 { 00:11:33.431 "name": "NewBaseBdev", 00:11:33.431 "uuid": "335b076b-165f-4335-8772-481add36c7d5", 00:11:33.431 "is_configured": true, 00:11:33.431 "data_offset": 2048, 00:11:33.431 "data_size": 63488 00:11:33.431 }, 00:11:33.431 { 00:11:33.431 "name": "BaseBdev2", 00:11:33.431 "uuid": "16059db5-2b92-4a0b-a79d-7446474181fd", 00:11:33.431 "is_configured": true, 00:11:33.431 "data_offset": 2048, 00:11:33.431 "data_size": 63488 00:11:33.431 }, 00:11:33.431 { 
00:11:33.431 "name": "BaseBdev3", 00:11:33.431 "uuid": "9664c0bb-8baa-4d17-be94-74acf3075a25", 00:11:33.431 "is_configured": true, 00:11:33.431 "data_offset": 2048, 00:11:33.431 "data_size": 63488 00:11:33.431 }, 00:11:33.431 { 00:11:33.431 "name": "BaseBdev4", 00:11:33.431 "uuid": "44e45940-e92d-4b32-854a-b74996933402", 00:11:33.431 "is_configured": true, 00:11:33.431 "data_offset": 2048, 00:11:33.431 "data_size": 63488 00:11:33.431 } 00:11:33.431 ] 00:11:33.431 } 00:11:33.431 } 00:11:33.431 }' 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:33.431 BaseBdev2 00:11:33.431 BaseBdev3 00:11:33.431 BaseBdev4' 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
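The two jq filters traced above do the heavy lifting of `verify_raid_bdev_properties`: one collects the names of the configured base bdevs, the other flattens the geometry fields (`block_size`, `md_size`, `md_interleave`, `dif_type`) into a single comparable string. A minimal sketch against sample JSON (illustrative values standing in for live `rpc_cmd bdev_get_bdevs` output, not the actual RPC):

```shell
# Sketch only: sample JSON stands in for rpc_cmd bdev_get_bdevs output.
raid_json='{
  "block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null,
  "driver_specific": { "raid": { "base_bdevs_list": [
    { "name": "NewBaseBdev", "is_configured": true },
    { "name": "BaseBdev2",   "is_configured": false }
  ] } } }'

# Names of configured base bdevs, one per line.
base_bdev_names=$(jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' \
  <<< "$raid_json")

# Geometry string; jq join() renders null fields as empty strings, which is
# why the captured value is "512" followed by three spaces and why the test
# later matches it with the escaped pattern [[ 512 ... == \5\1\2\ \ \  ]].
cmp_raid_bdev=$(jq -r \
  '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' \
  <<< "$raid_json")

echo "$base_bdev_names"
```

With only one base bdev marked `is_configured`, the first filter emits just `NewBaseBdev`; in the real run all four base bdevs are configured, producing the four-name `base_bdev_names` list seen in the trace.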
00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.431 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.707 [2024-11-26 21:18:51.639650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.707 [2024-11-26 21:18:51.639681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.707 [2024-11-26 21:18:51.639754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.707 [2024-11-26 21:18:51.640076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.707 [2024-11-26 21:18:51.640097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73655 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73655 ']' 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73655 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73655 00:11:33.707 killing process with pid 73655 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73655' 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73655 00:11:33.707 [2024-11-26 21:18:51.687595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.707 21:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73655 00:11:33.980 [2024-11-26 21:18:52.055652] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.359 21:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:35.359 00:11:35.359 real 0m11.364s 00:11:35.359 user 0m18.119s 00:11:35.359 sys 0m2.017s 00:11:35.359 21:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:35.359 21:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.359 ************************************ 00:11:35.359 END TEST raid_state_function_test_sb 00:11:35.359 ************************************ 00:11:35.359 21:18:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:35.359 21:18:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:35.359 21:18:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.359 21:18:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.359 ************************************ 00:11:35.359 START TEST raid_superblock_test 00:11:35.359 ************************************ 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:35.359 21:18:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74325 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74325 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74325 ']' 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.359 21:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.359 [2024-11-26 21:18:53.288126] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:35.359 [2024-11-26 21:18:53.288252] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74325 ] 00:11:35.359 [2024-11-26 21:18:53.461336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.618 [2024-11-26 21:18:53.572713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.877 [2024-11-26 21:18:53.775309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.877 [2024-11-26 21:18:53.775374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:36.138 
21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.138 malloc1 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.138 [2024-11-26 21:18:54.144018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:36.138 [2024-11-26 21:18:54.144069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.138 [2024-11-26 21:18:54.144089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:36.138 [2024-11-26 21:18:54.144098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.138 [2024-11-26 21:18:54.146116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.138 [2024-11-26 21:18:54.146147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:36.138 pt1 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.138 malloc2 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.138 [2024-11-26 21:18:54.199256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:36.138 [2024-11-26 21:18:54.199306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.138 [2024-11-26 21:18:54.199329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:36.138 [2024-11-26 21:18:54.199338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.138 [2024-11-26 21:18:54.201312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.138 [2024-11-26 21:18:54.201343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:36.138 
pt2 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.138 malloc3 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.138 [2024-11-26 21:18:54.267566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:36.138 [2024-11-26 21:18:54.267615] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.138 [2024-11-26 21:18:54.267634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:36.138 [2024-11-26 21:18:54.267642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.138 [2024-11-26 21:18:54.269682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.138 [2024-11-26 21:18:54.269714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:36.138 pt3 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.138 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.398 malloc4 00:11:36.398 21:18:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.398 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:36.398 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.398 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.398 [2024-11-26 21:18:54.321986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:36.398 [2024-11-26 21:18:54.322033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.398 [2024-11-26 21:18:54.322052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:36.398 [2024-11-26 21:18:54.322060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.398 [2024-11-26 21:18:54.324239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.398 [2024-11-26 21:18:54.324271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:36.398 pt4 00:11:36.398 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.398 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:36.398 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:36.398 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:36.398 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.398 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.398 [2024-11-26 21:18:54.334002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:36.398 [2024-11-26 21:18:54.335741] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:36.398 [2024-11-26 21:18:54.335805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:36.398 [2024-11-26 21:18:54.335865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:36.398 [2024-11-26 21:18:54.336090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:36.398 [2024-11-26 21:18:54.336118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.399 [2024-11-26 21:18:54.336364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:36.399 [2024-11-26 21:18:54.336538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:36.399 [2024-11-26 21:18:54.336558] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:36.399 [2024-11-26 21:18:54.336697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.399 
21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.399 "name": "raid_bdev1", 00:11:36.399 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:36.399 "strip_size_kb": 0, 00:11:36.399 "state": "online", 00:11:36.399 "raid_level": "raid1", 00:11:36.399 "superblock": true, 00:11:36.399 "num_base_bdevs": 4, 00:11:36.399 "num_base_bdevs_discovered": 4, 00:11:36.399 "num_base_bdevs_operational": 4, 00:11:36.399 "base_bdevs_list": [ 00:11:36.399 { 00:11:36.399 "name": "pt1", 00:11:36.399 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.399 "is_configured": true, 00:11:36.399 "data_offset": 2048, 00:11:36.399 "data_size": 63488 00:11:36.399 }, 00:11:36.399 { 00:11:36.399 "name": "pt2", 00:11:36.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.399 "is_configured": true, 00:11:36.399 "data_offset": 2048, 00:11:36.399 "data_size": 63488 00:11:36.399 }, 00:11:36.399 { 00:11:36.399 "name": "pt3", 00:11:36.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.399 "is_configured": true, 00:11:36.399 "data_offset": 2048, 00:11:36.399 "data_size": 63488 
00:11:36.399 }, 00:11:36.399 { 00:11:36.399 "name": "pt4", 00:11:36.399 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:36.399 "is_configured": true, 00:11:36.399 "data_offset": 2048, 00:11:36.399 "data_size": 63488 00:11:36.399 } 00:11:36.399 ] 00:11:36.399 }' 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.399 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.658 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:36.658 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:36.658 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.658 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.658 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.658 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.658 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.659 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.659 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.659 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.659 [2024-11-26 21:18:54.765557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.659 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.659 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.659 "name": "raid_bdev1", 00:11:36.659 "aliases": [ 00:11:36.659 "c7348e29-7fd0-4610-84cc-d5af02a39048" 00:11:36.659 ], 
00:11:36.659 "product_name": "Raid Volume", 00:11:36.659 "block_size": 512, 00:11:36.659 "num_blocks": 63488, 00:11:36.659 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:36.659 "assigned_rate_limits": { 00:11:36.659 "rw_ios_per_sec": 0, 00:11:36.659 "rw_mbytes_per_sec": 0, 00:11:36.659 "r_mbytes_per_sec": 0, 00:11:36.659 "w_mbytes_per_sec": 0 00:11:36.659 }, 00:11:36.659 "claimed": false, 00:11:36.659 "zoned": false, 00:11:36.659 "supported_io_types": { 00:11:36.659 "read": true, 00:11:36.659 "write": true, 00:11:36.659 "unmap": false, 00:11:36.659 "flush": false, 00:11:36.659 "reset": true, 00:11:36.659 "nvme_admin": false, 00:11:36.659 "nvme_io": false, 00:11:36.659 "nvme_io_md": false, 00:11:36.659 "write_zeroes": true, 00:11:36.659 "zcopy": false, 00:11:36.659 "get_zone_info": false, 00:11:36.659 "zone_management": false, 00:11:36.659 "zone_append": false, 00:11:36.659 "compare": false, 00:11:36.659 "compare_and_write": false, 00:11:36.659 "abort": false, 00:11:36.659 "seek_hole": false, 00:11:36.659 "seek_data": false, 00:11:36.659 "copy": false, 00:11:36.659 "nvme_iov_md": false 00:11:36.659 }, 00:11:36.659 "memory_domains": [ 00:11:36.659 { 00:11:36.659 "dma_device_id": "system", 00:11:36.659 "dma_device_type": 1 00:11:36.659 }, 00:11:36.659 { 00:11:36.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.659 "dma_device_type": 2 00:11:36.659 }, 00:11:36.659 { 00:11:36.659 "dma_device_id": "system", 00:11:36.659 "dma_device_type": 1 00:11:36.659 }, 00:11:36.659 { 00:11:36.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.659 "dma_device_type": 2 00:11:36.659 }, 00:11:36.659 { 00:11:36.659 "dma_device_id": "system", 00:11:36.659 "dma_device_type": 1 00:11:36.659 }, 00:11:36.659 { 00:11:36.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.659 "dma_device_type": 2 00:11:36.659 }, 00:11:36.659 { 00:11:36.659 "dma_device_id": "system", 00:11:36.659 "dma_device_type": 1 00:11:36.659 }, 00:11:36.659 { 00:11:36.659 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:36.659 "dma_device_type": 2 00:11:36.659 } 00:11:36.659 ], 00:11:36.659 "driver_specific": { 00:11:36.659 "raid": { 00:11:36.659 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:36.659 "strip_size_kb": 0, 00:11:36.659 "state": "online", 00:11:36.659 "raid_level": "raid1", 00:11:36.659 "superblock": true, 00:11:36.659 "num_base_bdevs": 4, 00:11:36.659 "num_base_bdevs_discovered": 4, 00:11:36.659 "num_base_bdevs_operational": 4, 00:11:36.659 "base_bdevs_list": [ 00:11:36.659 { 00:11:36.659 "name": "pt1", 00:11:36.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.659 "is_configured": true, 00:11:36.659 "data_offset": 2048, 00:11:36.659 "data_size": 63488 00:11:36.659 }, 00:11:36.659 { 00:11:36.659 "name": "pt2", 00:11:36.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.659 "is_configured": true, 00:11:36.659 "data_offset": 2048, 00:11:36.659 "data_size": 63488 00:11:36.659 }, 00:11:36.659 { 00:11:36.659 "name": "pt3", 00:11:36.659 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.659 "is_configured": true, 00:11:36.659 "data_offset": 2048, 00:11:36.659 "data_size": 63488 00:11:36.659 }, 00:11:36.659 { 00:11:36.659 "name": "pt4", 00:11:36.659 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:36.659 "is_configured": true, 00:11:36.659 "data_offset": 2048, 00:11:36.659 "data_size": 63488 00:11:36.659 } 00:11:36.659 ] 00:11:36.659 } 00:11:36.659 } 00:11:36.659 }' 00:11:36.659 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:36.918 pt2 00:11:36.918 pt3 00:11:36.918 pt4' 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.918 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.919 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.919 21:18:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:36.919 21:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.919 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.919 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.919 21:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:36.919 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.178 [2024-11-26 21:18:55.073022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c7348e29-7fd0-4610-84cc-d5af02a39048 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c7348e29-7fd0-4610-84cc-d5af02a39048 ']' 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.178 [2024-11-26 21:18:55.124627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.178 [2024-11-26 21:18:55.124658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.178 [2024-11-26 21:18:55.124765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.178 [2024-11-26 21:18:55.124851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.178 [2024-11-26 21:18:55.124866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:37.178 21:18:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.178 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.178 [2024-11-26 21:18:55.284318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:37.178 [2024-11-26 21:18:55.286171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:37.178 [2024-11-26 21:18:55.286221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:37.178 [2024-11-26 21:18:55.286255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:37.178 [2024-11-26 21:18:55.286300] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:37.178 [2024-11-26 21:18:55.286342] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:37.178 [2024-11-26 21:18:55.286360] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:37.178 [2024-11-26 21:18:55.286377] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:37.179 [2024-11-26 21:18:55.286389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.179 [2024-11-26 21:18:55.286401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:37.179 request: 00:11:37.179 { 00:11:37.179 "name": "raid_bdev1", 00:11:37.179 "raid_level": "raid1", 00:11:37.179 "base_bdevs": [ 00:11:37.179 "malloc1", 00:11:37.179 "malloc2", 00:11:37.179 "malloc3", 00:11:37.179 "malloc4" 00:11:37.179 ], 00:11:37.179 "superblock": false, 00:11:37.179 "method": "bdev_raid_create", 00:11:37.179 "req_id": 1 00:11:37.179 } 00:11:37.179 Got JSON-RPC error response 00:11:37.179 response: 00:11:37.179 { 00:11:37.179 "code": -17, 00:11:37.179 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:37.179 } 00:11:37.179 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:37.179 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:37.179 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:37.179 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:37.179 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:37.179 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.179 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.179 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.179 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:37.179 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:37.438 
21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.438 [2024-11-26 21:18:55.340214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:37.438 [2024-11-26 21:18:55.340267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.438 [2024-11-26 21:18:55.340283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:37.438 [2024-11-26 21:18:55.340293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.438 [2024-11-26 21:18:55.342403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.438 [2024-11-26 21:18:55.342442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:37.438 [2024-11-26 21:18:55.342519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:37.438 [2024-11-26 21:18:55.342580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:37.438 pt1 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.438 21:18:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.438 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.438 "name": "raid_bdev1", 00:11:37.438 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:37.438 "strip_size_kb": 0, 00:11:37.438 "state": "configuring", 00:11:37.438 "raid_level": "raid1", 00:11:37.438 "superblock": true, 00:11:37.438 "num_base_bdevs": 4, 00:11:37.438 "num_base_bdevs_discovered": 1, 00:11:37.438 "num_base_bdevs_operational": 4, 00:11:37.438 "base_bdevs_list": [ 00:11:37.438 { 00:11:37.438 "name": "pt1", 00:11:37.438 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.438 "is_configured": true, 00:11:37.438 "data_offset": 2048, 00:11:37.438 "data_size": 63488 00:11:37.438 }, 00:11:37.438 { 00:11:37.438 "name": null, 00:11:37.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.438 "is_configured": false, 00:11:37.438 "data_offset": 2048, 00:11:37.438 "data_size": 63488 00:11:37.438 }, 00:11:37.438 { 00:11:37.438 "name": null, 00:11:37.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.439 
"is_configured": false, 00:11:37.439 "data_offset": 2048, 00:11:37.439 "data_size": 63488 00:11:37.439 }, 00:11:37.439 { 00:11:37.439 "name": null, 00:11:37.439 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:37.439 "is_configured": false, 00:11:37.439 "data_offset": 2048, 00:11:37.439 "data_size": 63488 00:11:37.439 } 00:11:37.439 ] 00:11:37.439 }' 00:11:37.439 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.439 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.698 [2024-11-26 21:18:55.811878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.698 [2024-11-26 21:18:55.812001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.698 [2024-11-26 21:18:55.812030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:37.698 [2024-11-26 21:18:55.812043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.698 [2024-11-26 21:18:55.812593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.698 [2024-11-26 21:18:55.812623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.698 [2024-11-26 21:18:55.812731] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:37.698 [2024-11-26 21:18:55.812762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:37.698 pt2 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.698 [2024-11-26 21:18:55.823806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.698 21:18:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.698 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.958 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.958 "name": "raid_bdev1", 00:11:37.958 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:37.958 "strip_size_kb": 0, 00:11:37.958 "state": "configuring", 00:11:37.958 "raid_level": "raid1", 00:11:37.958 "superblock": true, 00:11:37.958 "num_base_bdevs": 4, 00:11:37.958 "num_base_bdevs_discovered": 1, 00:11:37.958 "num_base_bdevs_operational": 4, 00:11:37.958 "base_bdevs_list": [ 00:11:37.958 { 00:11:37.958 "name": "pt1", 00:11:37.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.958 "is_configured": true, 00:11:37.958 "data_offset": 2048, 00:11:37.958 "data_size": 63488 00:11:37.958 }, 00:11:37.958 { 00:11:37.958 "name": null, 00:11:37.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.958 "is_configured": false, 00:11:37.958 "data_offset": 0, 00:11:37.958 "data_size": 63488 00:11:37.958 }, 00:11:37.958 { 00:11:37.958 "name": null, 00:11:37.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.958 "is_configured": false, 00:11:37.958 "data_offset": 2048, 00:11:37.958 "data_size": 63488 00:11:37.958 }, 00:11:37.958 { 00:11:37.958 "name": null, 00:11:37.958 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:37.958 "is_configured": false, 00:11:37.958 "data_offset": 2048, 00:11:37.958 "data_size": 63488 00:11:37.958 } 00:11:37.958 ] 00:11:37.958 }' 00:11:37.958 21:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.958 21:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.218 [2024-11-26 21:18:56.283088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:38.218 [2024-11-26 21:18:56.283170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.218 [2024-11-26 21:18:56.283194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:38.218 [2024-11-26 21:18:56.283204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.218 [2024-11-26 21:18:56.283764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.218 [2024-11-26 21:18:56.283785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:38.218 [2024-11-26 21:18:56.283887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:38.218 [2024-11-26 21:18:56.283912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.218 pt2 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:38.218 21:18:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.218 [2024-11-26 21:18:56.295016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:38.218 [2024-11-26 21:18:56.295071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.218 [2024-11-26 21:18:56.295093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:38.218 [2024-11-26 21:18:56.295101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.218 [2024-11-26 21:18:56.295539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.218 [2024-11-26 21:18:56.295574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:38.218 [2024-11-26 21:18:56.295647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:38.218 [2024-11-26 21:18:56.295667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:38.218 pt3 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.218 [2024-11-26 21:18:56.306939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:38.218 [2024-11-26 
21:18:56.306994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.218 [2024-11-26 21:18:56.307012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:38.218 [2024-11-26 21:18:56.307020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.218 [2024-11-26 21:18:56.307420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.218 [2024-11-26 21:18:56.307437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:38.218 [2024-11-26 21:18:56.307502] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:38.218 [2024-11-26 21:18:56.307527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:38.218 [2024-11-26 21:18:56.307677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:38.218 [2024-11-26 21:18:56.307686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:38.218 [2024-11-26 21:18:56.307953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:38.218 [2024-11-26 21:18:56.308186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:38.218 [2024-11-26 21:18:56.308212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:38.218 [2024-11-26 21:18:56.308380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.218 pt4 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.218 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.477 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.477 "name": "raid_bdev1", 00:11:38.477 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:38.477 "strip_size_kb": 0, 00:11:38.477 "state": "online", 00:11:38.477 "raid_level": "raid1", 00:11:38.477 "superblock": true, 00:11:38.477 "num_base_bdevs": 4, 00:11:38.477 
"num_base_bdevs_discovered": 4, 00:11:38.477 "num_base_bdevs_operational": 4, 00:11:38.477 "base_bdevs_list": [ 00:11:38.477 { 00:11:38.478 "name": "pt1", 00:11:38.478 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.478 "is_configured": true, 00:11:38.478 "data_offset": 2048, 00:11:38.478 "data_size": 63488 00:11:38.478 }, 00:11:38.478 { 00:11:38.478 "name": "pt2", 00:11:38.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.478 "is_configured": true, 00:11:38.478 "data_offset": 2048, 00:11:38.478 "data_size": 63488 00:11:38.478 }, 00:11:38.478 { 00:11:38.478 "name": "pt3", 00:11:38.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.478 "is_configured": true, 00:11:38.478 "data_offset": 2048, 00:11:38.478 "data_size": 63488 00:11:38.478 }, 00:11:38.478 { 00:11:38.478 "name": "pt4", 00:11:38.478 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:38.478 "is_configured": true, 00:11:38.478 "data_offset": 2048, 00:11:38.478 "data_size": 63488 00:11:38.478 } 00:11:38.478 ] 00:11:38.478 }' 00:11:38.478 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.478 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.737 [2024-11-26 21:18:56.818511] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.737 "name": "raid_bdev1", 00:11:38.737 "aliases": [ 00:11:38.737 "c7348e29-7fd0-4610-84cc-d5af02a39048" 00:11:38.737 ], 00:11:38.737 "product_name": "Raid Volume", 00:11:38.737 "block_size": 512, 00:11:38.737 "num_blocks": 63488, 00:11:38.737 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:38.737 "assigned_rate_limits": { 00:11:38.737 "rw_ios_per_sec": 0, 00:11:38.737 "rw_mbytes_per_sec": 0, 00:11:38.737 "r_mbytes_per_sec": 0, 00:11:38.737 "w_mbytes_per_sec": 0 00:11:38.737 }, 00:11:38.737 "claimed": false, 00:11:38.737 "zoned": false, 00:11:38.737 "supported_io_types": { 00:11:38.737 "read": true, 00:11:38.737 "write": true, 00:11:38.737 "unmap": false, 00:11:38.737 "flush": false, 00:11:38.737 "reset": true, 00:11:38.737 "nvme_admin": false, 00:11:38.737 "nvme_io": false, 00:11:38.737 "nvme_io_md": false, 00:11:38.737 "write_zeroes": true, 00:11:38.737 "zcopy": false, 00:11:38.737 "get_zone_info": false, 00:11:38.737 "zone_management": false, 00:11:38.737 "zone_append": false, 00:11:38.737 "compare": false, 00:11:38.737 "compare_and_write": false, 00:11:38.737 "abort": false, 00:11:38.737 "seek_hole": false, 00:11:38.737 "seek_data": false, 00:11:38.737 "copy": false, 00:11:38.737 "nvme_iov_md": false 00:11:38.737 }, 00:11:38.737 "memory_domains": [ 00:11:38.737 { 00:11:38.737 "dma_device_id": "system", 00:11:38.737 
"dma_device_type": 1 00:11:38.737 }, 00:11:38.737 { 00:11:38.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.737 "dma_device_type": 2 00:11:38.737 }, 00:11:38.737 { 00:11:38.737 "dma_device_id": "system", 00:11:38.737 "dma_device_type": 1 00:11:38.737 }, 00:11:38.737 { 00:11:38.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.737 "dma_device_type": 2 00:11:38.737 }, 00:11:38.737 { 00:11:38.737 "dma_device_id": "system", 00:11:38.737 "dma_device_type": 1 00:11:38.737 }, 00:11:38.737 { 00:11:38.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.737 "dma_device_type": 2 00:11:38.737 }, 00:11:38.737 { 00:11:38.737 "dma_device_id": "system", 00:11:38.737 "dma_device_type": 1 00:11:38.737 }, 00:11:38.737 { 00:11:38.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.737 "dma_device_type": 2 00:11:38.737 } 00:11:38.737 ], 00:11:38.737 "driver_specific": { 00:11:38.737 "raid": { 00:11:38.737 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:38.737 "strip_size_kb": 0, 00:11:38.737 "state": "online", 00:11:38.737 "raid_level": "raid1", 00:11:38.737 "superblock": true, 00:11:38.737 "num_base_bdevs": 4, 00:11:38.737 "num_base_bdevs_discovered": 4, 00:11:38.737 "num_base_bdevs_operational": 4, 00:11:38.737 "base_bdevs_list": [ 00:11:38.737 { 00:11:38.737 "name": "pt1", 00:11:38.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.737 "is_configured": true, 00:11:38.737 "data_offset": 2048, 00:11:38.737 "data_size": 63488 00:11:38.737 }, 00:11:38.737 { 00:11:38.737 "name": "pt2", 00:11:38.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.737 "is_configured": true, 00:11:38.737 "data_offset": 2048, 00:11:38.737 "data_size": 63488 00:11:38.737 }, 00:11:38.737 { 00:11:38.737 "name": "pt3", 00:11:38.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.737 "is_configured": true, 00:11:38.737 "data_offset": 2048, 00:11:38.737 "data_size": 63488 00:11:38.737 }, 00:11:38.737 { 00:11:38.737 "name": "pt4", 00:11:38.737 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:38.737 "is_configured": true, 00:11:38.737 "data_offset": 2048, 00:11:38.737 "data_size": 63488 00:11:38.737 } 00:11:38.737 ] 00:11:38.737 } 00:11:38.737 } 00:11:38.737 }' 00:11:38.737 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.998 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:38.998 pt2 00:11:38.998 pt3 00:11:38.998 pt4' 00:11:38.998 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.998 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.998 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.998 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:38.998 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.998 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.998 21:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.998 21:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.998 21:18:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.998 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.259 [2024-11-26 21:18:57.169843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c7348e29-7fd0-4610-84cc-d5af02a39048 '!=' c7348e29-7fd0-4610-84cc-d5af02a39048 ']' 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.259 [2024-11-26 21:18:57.217516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:39.259 
21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.259 "name": "raid_bdev1", 00:11:39.259 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:39.259 "strip_size_kb": 0, 00:11:39.259 "state": 
"online", 00:11:39.259 "raid_level": "raid1", 00:11:39.259 "superblock": true, 00:11:39.259 "num_base_bdevs": 4, 00:11:39.259 "num_base_bdevs_discovered": 3, 00:11:39.259 "num_base_bdevs_operational": 3, 00:11:39.259 "base_bdevs_list": [ 00:11:39.259 { 00:11:39.259 "name": null, 00:11:39.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.259 "is_configured": false, 00:11:39.259 "data_offset": 0, 00:11:39.259 "data_size": 63488 00:11:39.259 }, 00:11:39.259 { 00:11:39.259 "name": "pt2", 00:11:39.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.259 "is_configured": true, 00:11:39.259 "data_offset": 2048, 00:11:39.259 "data_size": 63488 00:11:39.259 }, 00:11:39.259 { 00:11:39.259 "name": "pt3", 00:11:39.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.259 "is_configured": true, 00:11:39.259 "data_offset": 2048, 00:11:39.259 "data_size": 63488 00:11:39.259 }, 00:11:39.259 { 00:11:39.259 "name": "pt4", 00:11:39.259 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.259 "is_configured": true, 00:11:39.259 "data_offset": 2048, 00:11:39.259 "data_size": 63488 00:11:39.259 } 00:11:39.259 ] 00:11:39.259 }' 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.259 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.550 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:39.550 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.550 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.550 [2024-11-26 21:18:57.688649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.550 [2024-11-26 21:18:57.688698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.550 [2024-11-26 21:18:57.688802] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.550 [2024-11-26 21:18:57.688894] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.550 [2024-11-26 21:18:57.688915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:39.550 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.550 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:39.550 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.550 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.550 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.810 [2024-11-26 21:18:57.776444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.810 [2024-11-26 
21:18:57.776507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.810 [2024-11-26 21:18:57.776528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:39.810 [2024-11-26 21:18:57.776538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.810 [2024-11-26 21:18:57.779134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.810 [2024-11-26 21:18:57.779167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.810 [2024-11-26 21:18:57.779260] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:39.810 [2024-11-26 21:18:57.779316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.810 pt2 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.810 21:18:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.810 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.810 "name": "raid_bdev1", 00:11:39.810 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:39.810 "strip_size_kb": 0, 00:11:39.810 "state": "configuring", 00:11:39.810 "raid_level": "raid1", 00:11:39.810 "superblock": true, 00:11:39.810 "num_base_bdevs": 4, 00:11:39.810 "num_base_bdevs_discovered": 1, 00:11:39.810 "num_base_bdevs_operational": 3, 00:11:39.810 "base_bdevs_list": [ 00:11:39.810 { 00:11:39.810 "name": null, 00:11:39.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.810 "is_configured": false, 00:11:39.810 "data_offset": 2048, 00:11:39.810 "data_size": 63488 00:11:39.810 }, 00:11:39.811 { 00:11:39.811 "name": "pt2", 00:11:39.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.811 "is_configured": true, 00:11:39.811 "data_offset": 2048, 00:11:39.811 "data_size": 63488 00:11:39.811 }, 00:11:39.811 { 00:11:39.811 "name": null, 00:11:39.811 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.811 "is_configured": false, 00:11:39.811 "data_offset": 2048, 00:11:39.811 "data_size": 63488 00:11:39.811 }, 00:11:39.811 { 00:11:39.811 "name": null, 00:11:39.811 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.811 "is_configured": false, 00:11:39.811 "data_offset": 2048, 00:11:39.811 "data_size": 63488 00:11:39.811 
} 00:11:39.811 ] 00:11:39.811 }' 00:11:39.811 21:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.811 21:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.071 [2024-11-26 21:18:58.200171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:40.071 [2024-11-26 21:18:58.200255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.071 [2024-11-26 21:18:58.200280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:40.071 [2024-11-26 21:18:58.200290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.071 [2024-11-26 21:18:58.200841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.071 [2024-11-26 21:18:58.200869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:40.071 [2024-11-26 21:18:58.200993] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:40.071 [2024-11-26 21:18:58.201021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:40.071 pt3 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.071 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.330 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.331 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.331 "name": "raid_bdev1", 00:11:40.331 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:40.331 "strip_size_kb": 0, 00:11:40.331 "state": "configuring", 00:11:40.331 "raid_level": "raid1", 00:11:40.331 "superblock": true, 00:11:40.331 "num_base_bdevs": 4, 00:11:40.331 "num_base_bdevs_discovered": 2, 
00:11:40.331 "num_base_bdevs_operational": 3, 00:11:40.331 "base_bdevs_list": [ 00:11:40.331 { 00:11:40.331 "name": null, 00:11:40.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.331 "is_configured": false, 00:11:40.331 "data_offset": 2048, 00:11:40.331 "data_size": 63488 00:11:40.331 }, 00:11:40.331 { 00:11:40.331 "name": "pt2", 00:11:40.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.331 "is_configured": true, 00:11:40.331 "data_offset": 2048, 00:11:40.331 "data_size": 63488 00:11:40.331 }, 00:11:40.331 { 00:11:40.331 "name": "pt3", 00:11:40.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.331 "is_configured": true, 00:11:40.331 "data_offset": 2048, 00:11:40.331 "data_size": 63488 00:11:40.331 }, 00:11:40.331 { 00:11:40.331 "name": null, 00:11:40.331 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.331 "is_configured": false, 00:11:40.331 "data_offset": 2048, 00:11:40.331 "data_size": 63488 00:11:40.331 } 00:11:40.331 ] 00:11:40.331 }' 00:11:40.331 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.331 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.590 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.591 [2024-11-26 21:18:58.679521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:40.591 [2024-11-26 
21:18:58.679614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.591 [2024-11-26 21:18:58.679646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:40.591 [2024-11-26 21:18:58.679656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.591 [2024-11-26 21:18:58.680242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.591 [2024-11-26 21:18:58.680263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:40.591 [2024-11-26 21:18:58.680369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:40.591 [2024-11-26 21:18:58.680394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:40.591 [2024-11-26 21:18:58.680552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:40.591 [2024-11-26 21:18:58.680562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.591 [2024-11-26 21:18:58.680844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:40.591 [2024-11-26 21:18:58.681027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:40.591 [2024-11-26 21:18:58.681043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:40.591 [2024-11-26 21:18:58.681210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.591 pt4 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.591 21:18:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.591 "name": "raid_bdev1", 00:11:40.591 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:40.591 "strip_size_kb": 0, 00:11:40.591 "state": "online", 00:11:40.591 "raid_level": "raid1", 00:11:40.591 "superblock": true, 00:11:40.591 "num_base_bdevs": 4, 00:11:40.591 "num_base_bdevs_discovered": 3, 00:11:40.591 "num_base_bdevs_operational": 3, 00:11:40.591 "base_bdevs_list": [ 00:11:40.591 { 00:11:40.591 "name": null, 00:11:40.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.591 
"is_configured": false, 00:11:40.591 "data_offset": 2048, 00:11:40.591 "data_size": 63488 00:11:40.591 }, 00:11:40.591 { 00:11:40.591 "name": "pt2", 00:11:40.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.591 "is_configured": true, 00:11:40.591 "data_offset": 2048, 00:11:40.591 "data_size": 63488 00:11:40.591 }, 00:11:40.591 { 00:11:40.591 "name": "pt3", 00:11:40.591 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.591 "is_configured": true, 00:11:40.591 "data_offset": 2048, 00:11:40.591 "data_size": 63488 00:11:40.591 }, 00:11:40.591 { 00:11:40.591 "name": "pt4", 00:11:40.591 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.591 "is_configured": true, 00:11:40.591 "data_offset": 2048, 00:11:40.591 "data_size": 63488 00:11:40.591 } 00:11:40.591 ] 00:11:40.591 }' 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.591 21:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.162 [2024-11-26 21:18:59.114730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.162 [2024-11-26 21:18:59.114773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.162 [2024-11-26 21:18:59.114872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.162 [2024-11-26 21:18:59.114973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.162 [2024-11-26 21:18:59.114992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.162 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.162 [2024-11-26 21:18:59.186601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:41.162 [2024-11-26 21:18:59.186672] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:11:41.163 [2024-11-26 21:18:59.186693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:41.163 [2024-11-26 21:18:59.186707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.163 [2024-11-26 21:18:59.189336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.163 [2024-11-26 21:18:59.189386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:41.163 [2024-11-26 21:18:59.189474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:41.163 [2024-11-26 21:18:59.189533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:41.163 [2024-11-26 21:18:59.189702] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:41.163 [2024-11-26 21:18:59.189717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.163 [2024-11-26 21:18:59.189733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:41.163 [2024-11-26 21:18:59.189803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:41.163 [2024-11-26 21:18:59.189915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:41.163 pt1 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.163 "name": "raid_bdev1", 00:11:41.163 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:41.163 "strip_size_kb": 0, 00:11:41.163 "state": "configuring", 00:11:41.163 "raid_level": "raid1", 00:11:41.163 "superblock": true, 00:11:41.163 "num_base_bdevs": 4, 00:11:41.163 "num_base_bdevs_discovered": 2, 00:11:41.163 "num_base_bdevs_operational": 3, 00:11:41.163 "base_bdevs_list": [ 00:11:41.163 { 00:11:41.163 "name": null, 00:11:41.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.163 "is_configured": false, 00:11:41.163 
"data_offset": 2048, 00:11:41.163 "data_size": 63488 00:11:41.163 }, 00:11:41.163 { 00:11:41.163 "name": "pt2", 00:11:41.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.163 "is_configured": true, 00:11:41.163 "data_offset": 2048, 00:11:41.163 "data_size": 63488 00:11:41.163 }, 00:11:41.163 { 00:11:41.163 "name": "pt3", 00:11:41.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.163 "is_configured": true, 00:11:41.163 "data_offset": 2048, 00:11:41.163 "data_size": 63488 00:11:41.163 }, 00:11:41.163 { 00:11:41.163 "name": null, 00:11:41.163 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:41.163 "is_configured": false, 00:11:41.163 "data_offset": 2048, 00:11:41.163 "data_size": 63488 00:11:41.163 } 00:11:41.163 ] 00:11:41.163 }' 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.163 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:41.733 [2024-11-26 21:18:59.673788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:41.733 [2024-11-26 21:18:59.673864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.733 [2024-11-26 21:18:59.673890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:41.733 [2024-11-26 21:18:59.673899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.733 [2024-11-26 21:18:59.674440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.733 [2024-11-26 21:18:59.674467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:41.733 [2024-11-26 21:18:59.674563] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:41.733 [2024-11-26 21:18:59.674586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:41.733 [2024-11-26 21:18:59.674745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:41.733 [2024-11-26 21:18:59.674753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:41.733 [2024-11-26 21:18:59.675048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:41.733 [2024-11-26 21:18:59.675209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:41.733 [2024-11-26 21:18:59.675229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:41.733 [2024-11-26 21:18:59.675384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.733 pt4 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.733 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.733 "name": "raid_bdev1", 00:11:41.733 "uuid": "c7348e29-7fd0-4610-84cc-d5af02a39048", 00:11:41.733 "strip_size_kb": 0, 00:11:41.733 "state": "online", 00:11:41.733 "raid_level": "raid1", 00:11:41.733 "superblock": true, 00:11:41.733 "num_base_bdevs": 4, 00:11:41.733 "num_base_bdevs_discovered": 3, 00:11:41.733 "num_base_bdevs_operational": 3, 00:11:41.733 
"base_bdevs_list": [ 00:11:41.733 { 00:11:41.733 "name": null, 00:11:41.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.733 "is_configured": false, 00:11:41.733 "data_offset": 2048, 00:11:41.733 "data_size": 63488 00:11:41.733 }, 00:11:41.733 { 00:11:41.733 "name": "pt2", 00:11:41.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.733 "is_configured": true, 00:11:41.733 "data_offset": 2048, 00:11:41.733 "data_size": 63488 00:11:41.733 }, 00:11:41.733 { 00:11:41.733 "name": "pt3", 00:11:41.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.733 "is_configured": true, 00:11:41.733 "data_offset": 2048, 00:11:41.733 "data_size": 63488 00:11:41.733 }, 00:11:41.733 { 00:11:41.733 "name": "pt4", 00:11:41.733 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:41.733 "is_configured": true, 00:11:41.733 "data_offset": 2048, 00:11:41.733 "data_size": 63488 00:11:41.733 } 00:11:41.733 ] 00:11:41.733 }' 00:11:41.734 21:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.734 21:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.994 21:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:41.994 21:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:41.994 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.994 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.994 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.994 21:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:41.994 21:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.994 21:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # 
jq -r '.[] | .uuid' 00:11:41.994 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.994 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.994 [2024-11-26 21:19:00.113385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.994 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c7348e29-7fd0-4610-84cc-d5af02a39048 '!=' c7348e29-7fd0-4610-84cc-d5af02a39048 ']' 00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74325 00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74325 ']' 00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74325 00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74325 00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.255 killing process with pid 74325 00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74325' 00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74325 00:11:42.255 [2024-11-26 21:19:00.177894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.255 [2024-11-26 21:19:00.178046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:11:42.255 21:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74325 00:11:42.255 [2024-11-26 21:19:00.178139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.255 [2024-11-26 21:19:00.178154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:42.515 [2024-11-26 21:19:00.615526] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.929 21:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:43.929 00:11:43.929 real 0m8.660s 00:11:43.929 user 0m13.514s 00:11:43.929 sys 0m1.513s 00:11:43.929 21:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.929 ************************************ 00:11:43.929 END TEST raid_superblock_test 00:11:43.929 ************************************ 00:11:43.929 21:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.929 21:19:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:43.929 21:19:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:43.929 21:19:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.929 21:19:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.929 ************************************ 00:11:43.929 START TEST raid_read_error_test 00:11:43.929 ************************************ 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=read 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:43.929 
21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kTXtcUUCZo 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74818 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74818 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74818 ']' 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.929 21:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.929 [2024-11-26 21:19:02.031419] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:43.929 [2024-11-26 21:19:02.031547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74818 ] 00:11:44.189 [2024-11-26 21:19:02.205427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.448 [2024-11-26 21:19:02.345479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.448 [2024-11-26 21:19:02.587738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.448 [2024-11-26 21:19:02.587816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.709 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.709 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:44.709 21:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.709 21:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:44.709 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.709 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 BaseBdev1_malloc 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 true 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 [2024-11-26 21:19:02.924874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:44.969 [2024-11-26 21:19:02.924940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.969 [2024-11-26 21:19:02.924974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:44.969 [2024-11-26 21:19:02.924987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.969 [2024-11-26 21:19:02.927381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.969 [2024-11-26 21:19:02.927412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:44.969 BaseBdev1 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 BaseBdev2_malloc 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 true 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 21:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 [2024-11-26 21:19:02.999699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:44.969 [2024-11-26 21:19:02.999775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.969 [2024-11-26 21:19:02.999797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:44.969 [2024-11-26 21:19:02.999810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.969 [2024-11-26 21:19:03.002441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.969 [2024-11-26 21:19:03.002478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:44.969 BaseBdev2 00:11:44.969 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.969 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:44.969 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 BaseBdev3_malloc 00:11:44.969 21:19:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.969 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:44.969 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.969 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 true 00:11:44.969 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.970 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:44.970 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.970 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.970 [2024-11-26 21:19:03.084147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:44.970 [2024-11-26 21:19:03.084203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.970 [2024-11-26 21:19:03.084222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:44.970 [2024-11-26 21:19:03.084233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.970 [2024-11-26 21:19:03.086642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.970 [2024-11-26 21:19:03.086675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:44.970 BaseBdev3 00:11:44.970 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.970 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.970 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:44.970 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.970 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.230 BaseBdev4_malloc 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.230 true 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.230 [2024-11-26 21:19:03.181742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:45.230 [2024-11-26 21:19:03.181802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.230 [2024-11-26 21:19:03.181824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:45.230 [2024-11-26 21:19:03.181837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.230 [2024-11-26 21:19:03.183969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.230 [2024-11-26 21:19:03.184010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:45.230 BaseBdev4 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.230 [2024-11-26 21:19:03.193770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.230 [2024-11-26 21:19:03.195576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.230 [2024-11-26 21:19:03.195664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.230 [2024-11-26 21:19:03.195747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:45.230 [2024-11-26 21:19:03.196031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:45.230 [2024-11-26 21:19:03.196058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.230 [2024-11-26 21:19:03.196299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:45.230 [2024-11-26 21:19:03.196487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:45.230 [2024-11-26 21:19:03.196502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:45.230 [2024-11-26 21:19:03.196662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:45.230 21:19:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.230 "name": "raid_bdev1", 00:11:45.230 "uuid": "6b7fe286-faef-4eee-af0b-216f8c2158b9", 00:11:45.230 "strip_size_kb": 0, 00:11:45.230 "state": "online", 00:11:45.230 "raid_level": "raid1", 00:11:45.230 "superblock": true, 00:11:45.230 "num_base_bdevs": 4, 00:11:45.230 "num_base_bdevs_discovered": 4, 00:11:45.230 "num_base_bdevs_operational": 4, 00:11:45.230 "base_bdevs_list": [ 00:11:45.230 { 
00:11:45.230 "name": "BaseBdev1", 00:11:45.230 "uuid": "afc95714-52f2-5b2a-9586-0426097b1532", 00:11:45.230 "is_configured": true, 00:11:45.230 "data_offset": 2048, 00:11:45.230 "data_size": 63488 00:11:45.230 }, 00:11:45.230 { 00:11:45.230 "name": "BaseBdev2", 00:11:45.230 "uuid": "8ed8050e-f6ae-5a4b-9feb-75e836057ac4", 00:11:45.230 "is_configured": true, 00:11:45.230 "data_offset": 2048, 00:11:45.230 "data_size": 63488 00:11:45.230 }, 00:11:45.230 { 00:11:45.230 "name": "BaseBdev3", 00:11:45.230 "uuid": "c49766ce-08e9-5b4d-9f23-5aa0d050e960", 00:11:45.230 "is_configured": true, 00:11:45.230 "data_offset": 2048, 00:11:45.230 "data_size": 63488 00:11:45.230 }, 00:11:45.230 { 00:11:45.230 "name": "BaseBdev4", 00:11:45.230 "uuid": "232e49f8-f98f-5e36-be36-6055cc3047a5", 00:11:45.230 "is_configured": true, 00:11:45.230 "data_offset": 2048, 00:11:45.230 "data_size": 63488 00:11:45.230 } 00:11:45.230 ] 00:11:45.230 }' 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.230 21:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.490 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:45.490 21:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:45.750 [2024-11-26 21:19:03.710425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.689 21:19:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.689 21:19:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.689 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.689 "name": "raid_bdev1", 00:11:46.689 "uuid": "6b7fe286-faef-4eee-af0b-216f8c2158b9", 00:11:46.689 "strip_size_kb": 0, 00:11:46.689 "state": "online", 00:11:46.689 "raid_level": "raid1", 00:11:46.689 "superblock": true, 00:11:46.689 "num_base_bdevs": 4, 00:11:46.689 "num_base_bdevs_discovered": 4, 00:11:46.689 "num_base_bdevs_operational": 4, 00:11:46.690 "base_bdevs_list": [ 00:11:46.690 { 00:11:46.690 "name": "BaseBdev1", 00:11:46.690 "uuid": "afc95714-52f2-5b2a-9586-0426097b1532", 00:11:46.690 "is_configured": true, 00:11:46.690 "data_offset": 2048, 00:11:46.690 "data_size": 63488 00:11:46.690 }, 00:11:46.690 { 00:11:46.690 "name": "BaseBdev2", 00:11:46.690 "uuid": "8ed8050e-f6ae-5a4b-9feb-75e836057ac4", 00:11:46.690 "is_configured": true, 00:11:46.690 "data_offset": 2048, 00:11:46.690 "data_size": 63488 00:11:46.690 }, 00:11:46.690 { 00:11:46.690 "name": "BaseBdev3", 00:11:46.690 "uuid": "c49766ce-08e9-5b4d-9f23-5aa0d050e960", 00:11:46.690 "is_configured": true, 00:11:46.690 "data_offset": 2048, 00:11:46.690 "data_size": 63488 00:11:46.690 }, 00:11:46.690 { 00:11:46.690 "name": "BaseBdev4", 00:11:46.690 "uuid": "232e49f8-f98f-5e36-be36-6055cc3047a5", 00:11:46.690 "is_configured": true, 00:11:46.690 "data_offset": 2048, 00:11:46.690 "data_size": 63488 00:11:46.690 } 00:11:46.690 ] 00:11:46.690 }' 00:11:46.690 21:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.690 21:19:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.950 21:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.950 21:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.950 21:19:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.950 [2024-11-26 21:19:05.097619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.950 [2024-11-26 21:19:05.097663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.950 [2024-11-26 21:19:05.100359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.950 [2024-11-26 21:19:05.100432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.950 [2024-11-26 21:19:05.100558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.950 [2024-11-26 21:19:05.100576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:46.950 { 00:11:46.950 "results": [ 00:11:46.950 { 00:11:46.950 "job": "raid_bdev1", 00:11:46.950 "core_mask": "0x1", 00:11:46.950 "workload": "randrw", 00:11:46.950 "percentage": 50, 00:11:46.950 "status": "finished", 00:11:46.950 "queue_depth": 1, 00:11:46.950 "io_size": 131072, 00:11:46.950 "runtime": 1.38802, 00:11:46.950 "iops": 10445.0944510886, 00:11:46.950 "mibps": 1305.636806386075, 00:11:46.950 "io_failed": 0, 00:11:46.950 "io_timeout": 0, 00:11:46.950 "avg_latency_us": 92.79536391407096, 00:11:46.950 "min_latency_us": 24.929257641921396, 00:11:46.950 "max_latency_us": 1752.8733624454148 00:11:46.950 } 00:11:46.950 ], 00:11:46.950 "core_count": 1 00:11:46.950 } 00:11:46.950 21:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.950 21:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74818 00:11:46.950 21:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74818 ']' 00:11:46.950 21:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74818 00:11:47.210 21:19:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:47.210 21:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.210 21:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74818 00:11:47.210 21:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.210 21:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.210 killing process with pid 74818 00:11:47.210 21:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74818' 00:11:47.210 21:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74818 00:11:47.210 [2024-11-26 21:19:05.144773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.210 21:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74818 00:11:47.469 [2024-11-26 21:19:05.459457] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.850 21:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:48.850 21:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kTXtcUUCZo 00:11:48.850 21:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:48.850 21:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:48.850 21:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:48.850 21:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.850 21:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:48.850 21:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:48.850 00:11:48.850 real 0m4.711s 00:11:48.850 user 0m5.438s 00:11:48.850 sys 0m0.679s 
00:11:48.850 21:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.850 21:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.850 ************************************ 00:11:48.850 END TEST raid_read_error_test 00:11:48.850 ************************************ 00:11:48.850 21:19:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:48.850 21:19:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:48.850 21:19:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.850 21:19:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.850 ************************************ 00:11:48.850 START TEST raid_write_error_test 00:11:48.850 ************************************ 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1BHGqsdRwl 00:11:48.850 21:19:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74962 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74962 00:11:48.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74962 ']' 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.850 21:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.851 21:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.851 [2024-11-26 21:19:06.817608] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:48.851 [2024-11-26 21:19:06.817743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74962 ] 00:11:48.851 [2024-11-26 21:19:06.990192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.110 [2024-11-26 21:19:07.093965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.370 [2024-11-26 21:19:07.290668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.370 [2024-11-26 21:19:07.290706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.629 BaseBdev1_malloc 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.629 true 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.629 [2024-11-26 21:19:07.688130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:49.629 [2024-11-26 21:19:07.688242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.629 [2024-11-26 21:19:07.688268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:49.629 [2024-11-26 21:19:07.688281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.629 [2024-11-26 21:19:07.690417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.629 [2024-11-26 21:19:07.690467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:49.629 BaseBdev1 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.629 BaseBdev2_malloc 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:49.629 21:19:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.629 true 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.629 [2024-11-26 21:19:07.754407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:49.629 [2024-11-26 21:19:07.754533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.629 [2024-11-26 21:19:07.754557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:49.629 [2024-11-26 21:19:07.754569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.629 [2024-11-26 21:19:07.756683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.629 [2024-11-26 21:19:07.756728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:49.629 BaseBdev2 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.629 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:49.889 BaseBdev3_malloc 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.889 true 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.889 [2024-11-26 21:19:07.837200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:49.889 [2024-11-26 21:19:07.837305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.889 [2024-11-26 21:19:07.837329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:49.889 [2024-11-26 21:19:07.837341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.889 [2024-11-26 21:19:07.839433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.889 [2024-11-26 21:19:07.839517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:49.889 BaseBdev3 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.889 BaseBdev4_malloc 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.889 true 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.889 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.889 [2024-11-26 21:19:07.903503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:49.889 [2024-11-26 21:19:07.903560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.889 [2024-11-26 21:19:07.903596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:49.889 [2024-11-26 21:19:07.903609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.889 [2024-11-26 21:19:07.905759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.890 [2024-11-26 21:19:07.905806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:49.890 BaseBdev4 
00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.890 [2024-11-26 21:19:07.915556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.890 [2024-11-26 21:19:07.917425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.890 [2024-11-26 21:19:07.917500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.890 [2024-11-26 21:19:07.917563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:49.890 [2024-11-26 21:19:07.917783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:49.890 [2024-11-26 21:19:07.917801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.890 [2024-11-26 21:19:07.918072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:49.890 [2024-11-26 21:19:07.918243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:49.890 [2024-11-26 21:19:07.918255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:49.890 [2024-11-26 21:19:07.918400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.890 "name": "raid_bdev1", 00:11:49.890 "uuid": "867fd4d0-a419-4296-8546-0832ac421060", 00:11:49.890 "strip_size_kb": 0, 00:11:49.890 "state": "online", 00:11:49.890 "raid_level": "raid1", 00:11:49.890 "superblock": true, 00:11:49.890 "num_base_bdevs": 4, 00:11:49.890 "num_base_bdevs_discovered": 4, 00:11:49.890 
"num_base_bdevs_operational": 4, 00:11:49.890 "base_bdevs_list": [ 00:11:49.890 { 00:11:49.890 "name": "BaseBdev1", 00:11:49.890 "uuid": "8c6e74b1-b89b-5830-9017-66d2bcf3f238", 00:11:49.890 "is_configured": true, 00:11:49.890 "data_offset": 2048, 00:11:49.890 "data_size": 63488 00:11:49.890 }, 00:11:49.890 { 00:11:49.890 "name": "BaseBdev2", 00:11:49.890 "uuid": "341bf969-c4eb-5094-bfe7-bbda56ea9745", 00:11:49.890 "is_configured": true, 00:11:49.890 "data_offset": 2048, 00:11:49.890 "data_size": 63488 00:11:49.890 }, 00:11:49.890 { 00:11:49.890 "name": "BaseBdev3", 00:11:49.890 "uuid": "8387d204-0eb5-5d7b-aea8-0447e4d04039", 00:11:49.890 "is_configured": true, 00:11:49.890 "data_offset": 2048, 00:11:49.890 "data_size": 63488 00:11:49.890 }, 00:11:49.890 { 00:11:49.890 "name": "BaseBdev4", 00:11:49.890 "uuid": "6076aa35-1e3b-550e-a9f9-7a27be30c62c", 00:11:49.890 "is_configured": true, 00:11:49.890 "data_offset": 2048, 00:11:49.890 "data_size": 63488 00:11:49.890 } 00:11:49.890 ] 00:11:49.890 }' 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.890 21:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.459 21:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:50.459 21:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:50.459 [2024-11-26 21:19:08.444141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.397 [2024-11-26 21:19:09.362757] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:51.397 [2024-11-26 21:19:09.362827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.397 [2024-11-26 21:19:09.363073] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.397 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.397 "name": "raid_bdev1", 00:11:51.397 "uuid": "867fd4d0-a419-4296-8546-0832ac421060", 00:11:51.397 "strip_size_kb": 0, 00:11:51.397 "state": "online", 00:11:51.397 "raid_level": "raid1", 00:11:51.397 "superblock": true, 00:11:51.397 "num_base_bdevs": 4, 00:11:51.397 "num_base_bdevs_discovered": 3, 00:11:51.397 "num_base_bdevs_operational": 3, 00:11:51.397 "base_bdevs_list": [ 00:11:51.397 { 00:11:51.397 "name": null, 00:11:51.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.397 "is_configured": false, 00:11:51.397 "data_offset": 0, 00:11:51.397 "data_size": 63488 00:11:51.397 }, 00:11:51.398 { 00:11:51.398 "name": "BaseBdev2", 00:11:51.398 "uuid": "341bf969-c4eb-5094-bfe7-bbda56ea9745", 00:11:51.398 "is_configured": true, 00:11:51.398 "data_offset": 2048, 00:11:51.398 "data_size": 63488 00:11:51.398 }, 00:11:51.398 { 00:11:51.398 "name": "BaseBdev3", 00:11:51.398 "uuid": "8387d204-0eb5-5d7b-aea8-0447e4d04039", 00:11:51.398 "is_configured": true, 00:11:51.398 "data_offset": 2048, 00:11:51.398 "data_size": 63488 00:11:51.398 }, 00:11:51.398 { 00:11:51.398 "name": "BaseBdev4", 00:11:51.398 "uuid": "6076aa35-1e3b-550e-a9f9-7a27be30c62c", 00:11:51.398 "is_configured": true, 00:11:51.398 "data_offset": 2048, 00:11:51.398 "data_size": 63488 00:11:51.398 } 00:11:51.398 ] 
00:11:51.398 }' 00:11:51.398 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.398 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.656 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:51.656 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.656 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.656 [2024-11-26 21:19:09.786308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.656 [2024-11-26 21:19:09.786344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.656 [2024-11-26 21:19:09.789063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.656 [2024-11-26 21:19:09.789121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.656 [2024-11-26 21:19:09.789227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.656 [2024-11-26 21:19:09.789241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:51.656 { 00:11:51.656 "results": [ 00:11:51.656 { 00:11:51.656 "job": "raid_bdev1", 00:11:51.656 "core_mask": "0x1", 00:11:51.656 "workload": "randrw", 00:11:51.656 "percentage": 50, 00:11:51.656 "status": "finished", 00:11:51.656 "queue_depth": 1, 00:11:51.656 "io_size": 131072, 00:11:51.656 "runtime": 1.342809, 00:11:51.656 "iops": 11485.624537815876, 00:11:51.656 "mibps": 1435.7030672269846, 00:11:51.656 "io_failed": 0, 00:11:51.656 "io_timeout": 0, 00:11:51.656 "avg_latency_us": 84.09830789211486, 00:11:51.656 "min_latency_us": 24.705676855895195, 00:11:51.656 "max_latency_us": 1502.46288209607 00:11:51.656 } 00:11:51.656 ], 00:11:51.656 "core_count": 1 
00:11:51.656 } 00:11:51.656 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.656 21:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74962 00:11:51.656 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74962 ']' 00:11:51.656 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74962 00:11:51.656 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:51.656 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.656 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74962 00:11:51.915 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.915 killing process with pid 74962 00:11:51.915 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.915 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74962' 00:11:51.915 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74962 00:11:51.915 [2024-11-26 21:19:09.837355] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.915 21:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74962 00:11:52.174 [2024-11-26 21:19:10.158680] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.552 21:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1BHGqsdRwl 00:11:53.552 21:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:53.552 21:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:53.552 21:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:53.552 21:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:53.552 21:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.552 21:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:53.552 21:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:53.552 00:11:53.552 real 0m4.637s 00:11:53.552 user 0m5.401s 00:11:53.552 sys 0m0.599s 00:11:53.552 ************************************ 00:11:53.552 END TEST raid_write_error_test 00:11:53.552 ************************************ 00:11:53.552 21:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.552 21:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.552 21:19:11 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:53.552 21:19:11 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:53.552 21:19:11 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:53.552 21:19:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:53.552 21:19:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.552 21:19:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.552 ************************************ 00:11:53.552 START TEST raid_rebuild_test 00:11:53.552 ************************************ 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:53.552 
21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75106 00:11:53.552 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:53.553 21:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75106 00:11:53.553 21:19:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75106 ']' 00:11:53.553 21:19:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.553 21:19:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.553 21:19:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.553 21:19:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.553 21:19:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.553 [2024-11-26 21:19:11.511504] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:53.553 [2024-11-26 21:19:11.511721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:53.553 Zero copy mechanism will not be used. 
00:11:53.553 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75106 ] 00:11:53.553 [2024-11-26 21:19:11.683108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.812 [2024-11-26 21:19:11.794169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.071 [2024-11-26 21:19:11.988372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.071 [2024-11-26 21:19:11.988439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.330 BaseBdev1_malloc 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.330 [2024-11-26 21:19:12.381138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:54.330 [2024-11-26 21:19:12.381224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.330 [2024-11-26 
21:19:12.381257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:54.330 [2024-11-26 21:19:12.381277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.330 [2024-11-26 21:19:12.383983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.330 [2024-11-26 21:19:12.384031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:54.330 BaseBdev1 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.330 BaseBdev2_malloc 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.330 [2024-11-26 21:19:12.435105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:54.330 [2024-11-26 21:19:12.435239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.330 [2024-11-26 21:19:12.435269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:54.330 [2024-11-26 21:19:12.435282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:54.330 [2024-11-26 21:19:12.437418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.330 [2024-11-26 21:19:12.437466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:54.330 BaseBdev2 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.330 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.589 spare_malloc 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.589 spare_delay 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.589 [2024-11-26 21:19:12.513271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:54.589 [2024-11-26 21:19:12.513353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.589 [2024-11-26 21:19:12.513377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:11:54.589 [2024-11-26 21:19:12.513390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.589 [2024-11-26 21:19:12.515595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.589 [2024-11-26 21:19:12.515650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:54.589 spare 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.589 [2024-11-26 21:19:12.525288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.589 [2024-11-26 21:19:12.527085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.589 [2024-11-26 21:19:12.527183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:54.589 [2024-11-26 21:19:12.527199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:54.589 [2024-11-26 21:19:12.527452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:54.589 [2024-11-26 21:19:12.527616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:54.589 [2024-11-26 21:19:12.527640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:54.589 [2024-11-26 21:19:12.527811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.589 
21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.589 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.590 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.590 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.590 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.590 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.590 "name": "raid_bdev1", 00:11:54.590 "uuid": "75e6a28d-b5eb-4a9a-aba1-12c214df1bdb", 00:11:54.590 "strip_size_kb": 0, 00:11:54.590 "state": "online", 00:11:54.590 "raid_level": "raid1", 00:11:54.590 "superblock": false, 00:11:54.590 "num_base_bdevs": 2, 00:11:54.590 "num_base_bdevs_discovered": 
2, 00:11:54.590 "num_base_bdevs_operational": 2, 00:11:54.590 "base_bdevs_list": [ 00:11:54.590 { 00:11:54.590 "name": "BaseBdev1", 00:11:54.590 "uuid": "0a6c4e6f-427f-596f-b288-8d54c5df72f3", 00:11:54.590 "is_configured": true, 00:11:54.590 "data_offset": 0, 00:11:54.590 "data_size": 65536 00:11:54.590 }, 00:11:54.590 { 00:11:54.590 "name": "BaseBdev2", 00:11:54.590 "uuid": "8c3d2e7b-388f-5fd9-aa79-4f35c09eebe8", 00:11:54.590 "is_configured": true, 00:11:54.590 "data_offset": 0, 00:11:54.590 "data_size": 65536 00:11:54.590 } 00:11:54.590 ] 00:11:54.590 }' 00:11:54.590 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.590 21:19:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.849 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:54.849 21:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:54.849 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.849 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.108 [2024-11-26 21:19:13.008783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:55.108 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:55.109 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:55.109 [2024-11-26 21:19:13.256214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:55.368 /dev/nbd0 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.368 1+0 records in 00:11:55.368 1+0 records out 00:11:55.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389717 s, 10.5 MB/s 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:11:55.368 21:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:00.654 65536+0 records in 00:12:00.654 65536+0 records out 00:12:00.654 33554432 bytes (34 MB, 32 MiB) copied, 4.59571 s, 7.3 MB/s 00:12:00.654 21:19:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:00.654 21:19:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:00.654 21:19:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:00.654 21:19:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:00.654 21:19:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:00.654 21:19:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.654 21:19:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:00.654 [2024-11-26 21:19:18.123167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.654 
21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.654 [2024-11-26 21:19:18.155173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:00.654 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.655 "name": "raid_bdev1", 00:12:00.655 "uuid": "75e6a28d-b5eb-4a9a-aba1-12c214df1bdb", 00:12:00.655 "strip_size_kb": 0, 00:12:00.655 "state": "online", 00:12:00.655 "raid_level": "raid1", 00:12:00.655 "superblock": false, 00:12:00.655 "num_base_bdevs": 2, 00:12:00.655 "num_base_bdevs_discovered": 1, 00:12:00.655 "num_base_bdevs_operational": 1, 00:12:00.655 "base_bdevs_list": [ 00:12:00.655 { 00:12:00.655 "name": null, 00:12:00.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.655 "is_configured": false, 00:12:00.655 "data_offset": 0, 00:12:00.655 "data_size": 65536 00:12:00.655 }, 00:12:00.655 { 00:12:00.655 "name": "BaseBdev2", 00:12:00.655 "uuid": "8c3d2e7b-388f-5fd9-aa79-4f35c09eebe8", 00:12:00.655 "is_configured": true, 00:12:00.655 "data_offset": 0, 00:12:00.655 "data_size": 65536 00:12:00.655 } 00:12:00.655 ] 00:12:00.655 }' 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.655 [2024-11-26 21:19:18.590483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:00.655 [2024-11-26 21:19:18.607866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:00.655 21:19:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.655 21:19:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:00.655 [2024-11-26 21:19:18.609886] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:01.588 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.588 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.588 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.588 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.588 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.588 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.588 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.588 21:19:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.588 21:19:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.588 21:19:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.588 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.588 "name": "raid_bdev1", 00:12:01.588 "uuid": "75e6a28d-b5eb-4a9a-aba1-12c214df1bdb", 00:12:01.588 "strip_size_kb": 0, 00:12:01.588 "state": "online", 00:12:01.588 "raid_level": "raid1", 00:12:01.588 "superblock": false, 00:12:01.588 "num_base_bdevs": 2, 00:12:01.588 "num_base_bdevs_discovered": 2, 00:12:01.588 "num_base_bdevs_operational": 2, 00:12:01.588 "process": { 00:12:01.588 "type": "rebuild", 00:12:01.588 "target": "spare", 00:12:01.588 "progress": { 00:12:01.588 "blocks": 20480, 00:12:01.588 "percent": 31 00:12:01.588 } 00:12:01.588 }, 00:12:01.588 "base_bdevs_list": [ 00:12:01.588 { 
00:12:01.588 "name": "spare", 00:12:01.588 "uuid": "d579d539-9e63-5b5b-b554-22390ad8a63e", 00:12:01.588 "is_configured": true, 00:12:01.588 "data_offset": 0, 00:12:01.588 "data_size": 65536 00:12:01.588 }, 00:12:01.588 { 00:12:01.588 "name": "BaseBdev2", 00:12:01.588 "uuid": "8c3d2e7b-388f-5fd9-aa79-4f35c09eebe8", 00:12:01.588 "is_configured": true, 00:12:01.588 "data_offset": 0, 00:12:01.588 "data_size": 65536 00:12:01.588 } 00:12:01.589 ] 00:12:01.589 }' 00:12:01.589 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.589 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.589 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.848 [2024-11-26 21:19:19.769451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.848 [2024-11-26 21:19:19.815914] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:01.848 [2024-11-26 21:19:19.816038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.848 [2024-11-26 21:19:19.816057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.848 [2024-11-26 21:19:19.816070] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.848 21:19:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.848 "name": "raid_bdev1", 00:12:01.848 "uuid": "75e6a28d-b5eb-4a9a-aba1-12c214df1bdb", 00:12:01.848 "strip_size_kb": 0, 00:12:01.848 "state": "online", 00:12:01.848 "raid_level": "raid1", 00:12:01.848 "superblock": false, 00:12:01.848 "num_base_bdevs": 2, 00:12:01.848 "num_base_bdevs_discovered": 1, 
00:12:01.848 "num_base_bdevs_operational": 1, 00:12:01.848 "base_bdevs_list": [ 00:12:01.848 { 00:12:01.848 "name": null, 00:12:01.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.848 "is_configured": false, 00:12:01.848 "data_offset": 0, 00:12:01.848 "data_size": 65536 00:12:01.848 }, 00:12:01.848 { 00:12:01.848 "name": "BaseBdev2", 00:12:01.848 "uuid": "8c3d2e7b-388f-5fd9-aa79-4f35c09eebe8", 00:12:01.848 "is_configured": true, 00:12:01.848 "data_offset": 0, 00:12:01.848 "data_size": 65536 00:12:01.848 } 00:12:01.848 ] 00:12:01.848 }' 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.848 21:19:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.418 "name": "raid_bdev1", 00:12:02.418 "uuid": 
"75e6a28d-b5eb-4a9a-aba1-12c214df1bdb", 00:12:02.418 "strip_size_kb": 0, 00:12:02.418 "state": "online", 00:12:02.418 "raid_level": "raid1", 00:12:02.418 "superblock": false, 00:12:02.418 "num_base_bdevs": 2, 00:12:02.418 "num_base_bdevs_discovered": 1, 00:12:02.418 "num_base_bdevs_operational": 1, 00:12:02.418 "base_bdevs_list": [ 00:12:02.418 { 00:12:02.418 "name": null, 00:12:02.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.418 "is_configured": false, 00:12:02.418 "data_offset": 0, 00:12:02.418 "data_size": 65536 00:12:02.418 }, 00:12:02.418 { 00:12:02.418 "name": "BaseBdev2", 00:12:02.418 "uuid": "8c3d2e7b-388f-5fd9-aa79-4f35c09eebe8", 00:12:02.418 "is_configured": true, 00:12:02.418 "data_offset": 0, 00:12:02.418 "data_size": 65536 00:12:02.418 } 00:12:02.418 ] 00:12:02.418 }' 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.418 [2024-11-26 21:19:20.406069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:02.418 [2024-11-26 21:19:20.422180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.418 21:19:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:12:02.418 [2024-11-26 21:19:20.424064] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:03.357 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.357 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.357 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.357 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.357 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.357 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.357 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.357 21:19:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.357 21:19:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.357 21:19:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.357 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.357 "name": "raid_bdev1", 00:12:03.357 "uuid": "75e6a28d-b5eb-4a9a-aba1-12c214df1bdb", 00:12:03.357 "strip_size_kb": 0, 00:12:03.357 "state": "online", 00:12:03.358 "raid_level": "raid1", 00:12:03.358 "superblock": false, 00:12:03.358 "num_base_bdevs": 2, 00:12:03.358 "num_base_bdevs_discovered": 2, 00:12:03.358 "num_base_bdevs_operational": 2, 00:12:03.358 "process": { 00:12:03.358 "type": "rebuild", 00:12:03.358 "target": "spare", 00:12:03.358 "progress": { 00:12:03.358 "blocks": 20480, 00:12:03.358 "percent": 31 00:12:03.358 } 00:12:03.358 }, 00:12:03.358 "base_bdevs_list": [ 00:12:03.358 { 00:12:03.358 "name": "spare", 00:12:03.358 "uuid": 
"d579d539-9e63-5b5b-b554-22390ad8a63e", 00:12:03.358 "is_configured": true, 00:12:03.358 "data_offset": 0, 00:12:03.358 "data_size": 65536 00:12:03.358 }, 00:12:03.358 { 00:12:03.358 "name": "BaseBdev2", 00:12:03.358 "uuid": "8c3d2e7b-388f-5fd9-aa79-4f35c09eebe8", 00:12:03.358 "is_configured": true, 00:12:03.358 "data_offset": 0, 00:12:03.358 "data_size": 65536 00:12:03.358 } 00:12:03.358 ] 00:12:03.358 }' 00:12:03.358 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.358 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.358 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=363 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.618 "name": "raid_bdev1", 00:12:03.618 "uuid": "75e6a28d-b5eb-4a9a-aba1-12c214df1bdb", 00:12:03.618 "strip_size_kb": 0, 00:12:03.618 "state": "online", 00:12:03.618 "raid_level": "raid1", 00:12:03.618 "superblock": false, 00:12:03.618 "num_base_bdevs": 2, 00:12:03.618 "num_base_bdevs_discovered": 2, 00:12:03.618 "num_base_bdevs_operational": 2, 00:12:03.618 "process": { 00:12:03.618 "type": "rebuild", 00:12:03.618 "target": "spare", 00:12:03.618 "progress": { 00:12:03.618 "blocks": 22528, 00:12:03.618 "percent": 34 00:12:03.618 } 00:12:03.618 }, 00:12:03.618 "base_bdevs_list": [ 00:12:03.618 { 00:12:03.618 "name": "spare", 00:12:03.618 "uuid": "d579d539-9e63-5b5b-b554-22390ad8a63e", 00:12:03.618 "is_configured": true, 00:12:03.618 "data_offset": 0, 00:12:03.618 "data_size": 65536 00:12:03.618 }, 00:12:03.618 { 00:12:03.618 "name": "BaseBdev2", 00:12:03.618 "uuid": "8c3d2e7b-388f-5fd9-aa79-4f35c09eebe8", 00:12:03.618 "is_configured": true, 00:12:03.618 "data_offset": 0, 00:12:03.618 "data_size": 65536 00:12:03.618 } 00:12:03.618 ] 00:12:03.618 }' 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.618 21:19:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:04.558 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:04.558 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.558 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.558 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.558 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.558 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.558 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.558 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.558 21:19:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.558 21:19:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.818 21:19:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.818 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.818 "name": "raid_bdev1", 00:12:04.818 "uuid": "75e6a28d-b5eb-4a9a-aba1-12c214df1bdb", 00:12:04.818 "strip_size_kb": 0, 00:12:04.818 "state": "online", 00:12:04.818 "raid_level": "raid1", 00:12:04.818 "superblock": false, 00:12:04.818 "num_base_bdevs": 2, 00:12:04.818 "num_base_bdevs_discovered": 2, 00:12:04.818 "num_base_bdevs_operational": 2, 00:12:04.818 "process": { 00:12:04.818 "type": "rebuild", 00:12:04.818 "target": "spare", 
00:12:04.818 "progress": { 00:12:04.818 "blocks": 45056, 00:12:04.818 "percent": 68 00:12:04.818 } 00:12:04.818 }, 00:12:04.818 "base_bdevs_list": [ 00:12:04.818 { 00:12:04.818 "name": "spare", 00:12:04.818 "uuid": "d579d539-9e63-5b5b-b554-22390ad8a63e", 00:12:04.818 "is_configured": true, 00:12:04.818 "data_offset": 0, 00:12:04.818 "data_size": 65536 00:12:04.818 }, 00:12:04.818 { 00:12:04.818 "name": "BaseBdev2", 00:12:04.818 "uuid": "8c3d2e7b-388f-5fd9-aa79-4f35c09eebe8", 00:12:04.818 "is_configured": true, 00:12:04.818 "data_offset": 0, 00:12:04.818 "data_size": 65536 00:12:04.818 } 00:12:04.818 ] 00:12:04.818 }' 00:12:04.818 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.818 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:04.818 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.818 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.818 21:19:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:05.772 [2024-11-26 21:19:23.639046] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:05.772 [2024-11-26 21:19:23.639138] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:05.772 [2024-11-26 21:19:23.639210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.772 "name": "raid_bdev1", 00:12:05.772 "uuid": "75e6a28d-b5eb-4a9a-aba1-12c214df1bdb", 00:12:05.772 "strip_size_kb": 0, 00:12:05.772 "state": "online", 00:12:05.772 "raid_level": "raid1", 00:12:05.772 "superblock": false, 00:12:05.772 "num_base_bdevs": 2, 00:12:05.772 "num_base_bdevs_discovered": 2, 00:12:05.772 "num_base_bdevs_operational": 2, 00:12:05.772 "base_bdevs_list": [ 00:12:05.772 { 00:12:05.772 "name": "spare", 00:12:05.772 "uuid": "d579d539-9e63-5b5b-b554-22390ad8a63e", 00:12:05.772 "is_configured": true, 00:12:05.772 "data_offset": 0, 00:12:05.772 "data_size": 65536 00:12:05.772 }, 00:12:05.772 { 00:12:05.772 "name": "BaseBdev2", 00:12:05.772 "uuid": "8c3d2e7b-388f-5fd9-aa79-4f35c09eebe8", 00:12:05.772 "is_configured": true, 00:12:05.772 "data_offset": 0, 00:12:05.772 "data_size": 65536 00:12:05.772 } 00:12:05.772 ] 00:12:05.772 }' 00:12:05.772 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.032 21:19:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.032 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.032 "name": "raid_bdev1", 00:12:06.032 "uuid": "75e6a28d-b5eb-4a9a-aba1-12c214df1bdb", 00:12:06.032 "strip_size_kb": 0, 00:12:06.032 "state": "online", 00:12:06.032 "raid_level": "raid1", 00:12:06.032 "superblock": false, 00:12:06.032 "num_base_bdevs": 2, 00:12:06.032 "num_base_bdevs_discovered": 2, 00:12:06.032 "num_base_bdevs_operational": 2, 00:12:06.032 "base_bdevs_list": [ 00:12:06.032 { 00:12:06.032 "name": "spare", 00:12:06.032 "uuid": "d579d539-9e63-5b5b-b554-22390ad8a63e", 00:12:06.032 "is_configured": true, 00:12:06.032 "data_offset": 0, 00:12:06.032 "data_size": 65536 
00:12:06.032 }, 00:12:06.032 { 00:12:06.032 "name": "BaseBdev2", 00:12:06.032 "uuid": "8c3d2e7b-388f-5fd9-aa79-4f35c09eebe8", 00:12:06.032 "is_configured": true, 00:12:06.032 "data_offset": 0, 00:12:06.032 "data_size": 65536 00:12:06.032 } 00:12:06.032 ] 00:12:06.032 }' 00:12:06.032 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.032 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:06.032 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.032 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:06.032 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:06.032 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.033 "name": "raid_bdev1", 00:12:06.033 "uuid": "75e6a28d-b5eb-4a9a-aba1-12c214df1bdb", 00:12:06.033 "strip_size_kb": 0, 00:12:06.033 "state": "online", 00:12:06.033 "raid_level": "raid1", 00:12:06.033 "superblock": false, 00:12:06.033 "num_base_bdevs": 2, 00:12:06.033 "num_base_bdevs_discovered": 2, 00:12:06.033 "num_base_bdevs_operational": 2, 00:12:06.033 "base_bdevs_list": [ 00:12:06.033 { 00:12:06.033 "name": "spare", 00:12:06.033 "uuid": "d579d539-9e63-5b5b-b554-22390ad8a63e", 00:12:06.033 "is_configured": true, 00:12:06.033 "data_offset": 0, 00:12:06.033 "data_size": 65536 00:12:06.033 }, 00:12:06.033 { 00:12:06.033 "name": "BaseBdev2", 00:12:06.033 "uuid": "8c3d2e7b-388f-5fd9-aa79-4f35c09eebe8", 00:12:06.033 "is_configured": true, 00:12:06.033 "data_offset": 0, 00:12:06.033 "data_size": 65536 00:12:06.033 } 00:12:06.033 ] 00:12:06.033 }' 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.033 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.602 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:06.602 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.602 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.602 [2024-11-26 21:19:24.584629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:06.602 [2024-11-26 21:19:24.584737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:12:06.602 [2024-11-26 21:19:24.584909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.602 [2024-11-26 21:19:24.585029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.603 [2024-11-26 21:19:24.585110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:06.603 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:06.863 /dev/nbd0 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.863 1+0 records in 00:12:06.863 1+0 records out 00:12:06.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270836 s, 15.1 MB/s 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:06.863 21:19:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:07.123 /dev/nbd1 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:12:07.123 1+0 records in 00:12:07.123 1+0 records out 00:12:07.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266866 s, 15.3 MB/s 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:07.123 21:19:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:07.383 21:19:25 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.383 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:07.642 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:07.642 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:07.642 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:07.642 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.642 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.642 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:07.642 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:07.642 21:19:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.642 21:19:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:07.643 21:19:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75106 00:12:07.643 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 
75106 ']' 00:12:07.643 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75106 00:12:07.643 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:07.643 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.643 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75106 00:12:07.902 killing process with pid 75106 00:12:07.902 Received shutdown signal, test time was about 60.000000 seconds 00:12:07.902 00:12:07.902 Latency(us) 00:12:07.902 [2024-11-26T21:19:26.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.902 [2024-11-26T21:19:26.058Z] =================================================================================================================== 00:12:07.902 [2024-11-26T21:19:26.058Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:07.902 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.902 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.902 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75106' 00:12:07.902 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75106 00:12:07.902 [2024-11-26 21:19:25.801572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.902 21:19:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75106 00:12:08.162 [2024-11-26 21:19:26.093348] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.101 21:19:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:09.101 00:12:09.101 real 0m15.756s 00:12:09.101 user 0m17.346s 00:12:09.101 sys 0m3.199s 00:12:09.101 21:19:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:09.101 ************************************ 00:12:09.101 END TEST raid_rebuild_test 00:12:09.101 ************************************ 00:12:09.101 21:19:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.101 21:19:27 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:09.101 21:19:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:09.101 21:19:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.101 21:19:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.101 ************************************ 00:12:09.101 START TEST raid_rebuild_test_sb 00:12:09.101 ************************************ 00:12:09.101 21:19:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:09.101 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:09.101 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:09.101 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:09.101 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:09.101 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:09.101 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:09.101 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:09.102 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:09.362 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75529 00:12:09.362 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:09.362 21:19:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75529 00:12:09.362 21:19:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75529 ']' 00:12:09.362 21:19:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:09.362 21:19:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.362 21:19:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.362 21:19:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.362 21:19:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.362 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:09.362 Zero copy mechanism will not be used. 00:12:09.362 [2024-11-26 21:19:27.340558] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:12:09.362 [2024-11-26 21:19:27.340672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75529 ] 00:12:09.362 [2024-11-26 21:19:27.513512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.621 [2024-11-26 21:19:27.628024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.881 [2024-11-26 21:19:27.824135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.881 [2024-11-26 21:19:27.824202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.141 BaseBdev1_malloc 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.141 [2024-11-26 21:19:28.205839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:10.141 [2024-11-26 21:19:28.205981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.141 [2024-11-26 21:19:28.206027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:10.141 [2024-11-26 21:19:28.206065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.141 [2024-11-26 21:19:28.208254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.141 [2024-11-26 21:19:28.208350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.141 BaseBdev1 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.141 BaseBdev2_malloc 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.141 [2024-11-26 21:19:28.265267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:10.141 [2024-11-26 21:19:28.265395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.141 [2024-11-26 21:19:28.265440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:10.141 [2024-11-26 21:19:28.265477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.141 [2024-11-26 21:19:28.267605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.141 [2024-11-26 21:19:28.267706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:10.141 BaseBdev2 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.141 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.401 spare_malloc 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # 
rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.401 spare_delay 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.401 [2024-11-26 21:19:28.342307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:10.401 [2024-11-26 21:19:28.342428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.401 [2024-11-26 21:19:28.342468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:10.401 [2024-11-26 21:19:28.342525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.401 [2024-11-26 21:19:28.344624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.401 [2024-11-26 21:19:28.344712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:10.401 spare 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.401 
[2024-11-26 21:19:28.354357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.401 [2024-11-26 21:19:28.356164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.401 [2024-11-26 21:19:28.356418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:10.401 [2024-11-26 21:19:28.356475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.401 [2024-11-26 21:19:28.356745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:10.401 [2024-11-26 21:19:28.356975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:10.401 [2024-11-26 21:19:28.357024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:10.401 [2024-11-26 21:19:28.357223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.401 "name": "raid_bdev1", 00:12:10.401 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:10.401 "strip_size_kb": 0, 00:12:10.401 "state": "online", 00:12:10.401 "raid_level": "raid1", 00:12:10.401 "superblock": true, 00:12:10.401 "num_base_bdevs": 2, 00:12:10.401 "num_base_bdevs_discovered": 2, 00:12:10.401 "num_base_bdevs_operational": 2, 00:12:10.401 "base_bdevs_list": [ 00:12:10.401 { 00:12:10.401 "name": "BaseBdev1", 00:12:10.401 "uuid": "765d7a4c-8706-5308-9446-70f83b9abd74", 00:12:10.401 "is_configured": true, 00:12:10.401 "data_offset": 2048, 00:12:10.401 "data_size": 63488 00:12:10.401 }, 00:12:10.401 { 00:12:10.401 "name": "BaseBdev2", 00:12:10.401 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:10.401 "is_configured": true, 00:12:10.401 "data_offset": 2048, 00:12:10.401 "data_size": 63488 00:12:10.401 } 00:12:10.401 ] 00:12:10.401 }' 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.401 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.971 [2024-11-26 21:19:28.825826] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- 
# bdev_list=('raid_bdev1') 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:10.971 21:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:10.971 [2024-11-26 21:19:29.097174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:10.971 /dev/nbd0 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.231 1+0 records in 00:12:11.231 1+0 records out 00:12:11.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548938 s, 7.5 MB/s 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:11.231 21:19:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:15.428 63488+0 records in 00:12:15.428 63488+0 records out 00:12:15.428 32505856 bytes (33 MB, 31 MiB) copied, 3.99993 s, 8.1 MB/s 00:12:15.428 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:15.428 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:15.428 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:15.428 21:19:33 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:15.428 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:15.428 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.428 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:15.428 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.428 [2024-11-26 21:19:33.376687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.428 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.428 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.429 [2024-11-26 21:19:33.392750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.429 "name": "raid_bdev1", 00:12:15.429 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:15.429 "strip_size_kb": 0, 00:12:15.429 "state": "online", 00:12:15.429 "raid_level": "raid1", 00:12:15.429 "superblock": true, 00:12:15.429 "num_base_bdevs": 2, 00:12:15.429 "num_base_bdevs_discovered": 1, 00:12:15.429 
"num_base_bdevs_operational": 1, 00:12:15.429 "base_bdevs_list": [ 00:12:15.429 { 00:12:15.429 "name": null, 00:12:15.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.429 "is_configured": false, 00:12:15.429 "data_offset": 0, 00:12:15.429 "data_size": 63488 00:12:15.429 }, 00:12:15.429 { 00:12:15.429 "name": "BaseBdev2", 00:12:15.429 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:15.429 "is_configured": true, 00:12:15.429 "data_offset": 2048, 00:12:15.429 "data_size": 63488 00:12:15.429 } 00:12:15.429 ] 00:12:15.429 }' 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.429 21:19:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.998 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:15.998 21:19:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.998 21:19:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.998 [2024-11-26 21:19:33.875986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:15.998 [2024-11-26 21:19:33.891428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:15.998 21:19:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.998 [2024-11-26 21:19:33.893274] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:15.998 21:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.936 "name": "raid_bdev1", 00:12:16.936 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:16.936 "strip_size_kb": 0, 00:12:16.936 "state": "online", 00:12:16.936 "raid_level": "raid1", 00:12:16.936 "superblock": true, 00:12:16.936 "num_base_bdevs": 2, 00:12:16.936 "num_base_bdevs_discovered": 2, 00:12:16.936 "num_base_bdevs_operational": 2, 00:12:16.936 "process": { 00:12:16.936 "type": "rebuild", 00:12:16.936 "target": "spare", 00:12:16.936 "progress": { 00:12:16.936 "blocks": 20480, 00:12:16.936 "percent": 32 00:12:16.936 } 00:12:16.936 }, 00:12:16.936 "base_bdevs_list": [ 00:12:16.936 { 00:12:16.936 "name": "spare", 00:12:16.936 "uuid": "62e8fa51-f21f-5d62-bf25-f8a159fe2381", 00:12:16.936 "is_configured": true, 00:12:16.936 "data_offset": 2048, 00:12:16.936 "data_size": 63488 00:12:16.936 }, 00:12:16.936 { 00:12:16.936 "name": "BaseBdev2", 00:12:16.936 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:16.936 "is_configured": true, 00:12:16.936 "data_offset": 2048, 00:12:16.936 "data_size": 63488 00:12:16.936 } 00:12:16.936 ] 00:12:16.936 }' 00:12:16.936 21:19:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.936 21:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.936 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.936 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.937 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:16.937 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.937 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.937 [2024-11-26 21:19:35.057259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:17.196 [2024-11-26 21:19:35.098259] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:17.196 [2024-11-26 21:19:35.098338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.196 [2024-11-26 21:19:35.098352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:17.196 [2024-11-26 21:19:35.098361] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:17.196 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.197 21:19:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.197 "name": "raid_bdev1", 00:12:17.197 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:17.197 "strip_size_kb": 0, 00:12:17.197 "state": "online", 00:12:17.197 "raid_level": "raid1", 00:12:17.197 "superblock": true, 00:12:17.197 "num_base_bdevs": 2, 00:12:17.197 "num_base_bdevs_discovered": 1, 00:12:17.197 "num_base_bdevs_operational": 1, 00:12:17.197 "base_bdevs_list": [ 00:12:17.197 { 00:12:17.197 "name": null, 00:12:17.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.197 "is_configured": false, 00:12:17.197 "data_offset": 0, 00:12:17.197 "data_size": 63488 00:12:17.197 }, 00:12:17.197 { 00:12:17.197 "name": "BaseBdev2", 00:12:17.197 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:17.197 
"is_configured": true, 00:12:17.197 "data_offset": 2048, 00:12:17.197 "data_size": 63488 00:12:17.197 } 00:12:17.197 ] 00:12:17.197 }' 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.197 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.457 "name": "raid_bdev1", 00:12:17.457 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:17.457 "strip_size_kb": 0, 00:12:17.457 "state": "online", 00:12:17.457 "raid_level": "raid1", 00:12:17.457 "superblock": true, 00:12:17.457 "num_base_bdevs": 2, 00:12:17.457 "num_base_bdevs_discovered": 1, 00:12:17.457 "num_base_bdevs_operational": 1, 00:12:17.457 "base_bdevs_list": [ 00:12:17.457 { 00:12:17.457 "name": null, 00:12:17.457 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:17.457 "is_configured": false, 00:12:17.457 "data_offset": 0, 00:12:17.457 "data_size": 63488 00:12:17.457 }, 00:12:17.457 { 00:12:17.457 "name": "BaseBdev2", 00:12:17.457 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:17.457 "is_configured": true, 00:12:17.457 "data_offset": 2048, 00:12:17.457 "data_size": 63488 00:12:17.457 } 00:12:17.457 ] 00:12:17.457 }' 00:12:17.457 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.716 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.716 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.716 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:17.716 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:17.716 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.716 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.716 [2024-11-26 21:19:35.660365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:17.716 [2024-11-26 21:19:35.676456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:17.716 21:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.716 21:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:17.716 [2024-11-26 21:19:35.678227] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.655 "name": "raid_bdev1", 00:12:18.655 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:18.655 "strip_size_kb": 0, 00:12:18.655 "state": "online", 00:12:18.655 "raid_level": "raid1", 00:12:18.655 "superblock": true, 00:12:18.655 "num_base_bdevs": 2, 00:12:18.655 "num_base_bdevs_discovered": 2, 00:12:18.655 "num_base_bdevs_operational": 2, 00:12:18.655 "process": { 00:12:18.655 "type": "rebuild", 00:12:18.655 "target": "spare", 00:12:18.655 "progress": { 00:12:18.655 "blocks": 20480, 00:12:18.655 "percent": 32 00:12:18.655 } 00:12:18.655 }, 00:12:18.655 "base_bdevs_list": [ 00:12:18.655 { 00:12:18.655 "name": "spare", 00:12:18.655 "uuid": "62e8fa51-f21f-5d62-bf25-f8a159fe2381", 00:12:18.655 "is_configured": true, 00:12:18.655 "data_offset": 2048, 00:12:18.655 "data_size": 63488 00:12:18.655 }, 00:12:18.655 { 00:12:18.655 "name": "BaseBdev2", 00:12:18.655 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:18.655 "is_configured": true, 00:12:18.655 "data_offset": 2048, 
00:12:18.655 "data_size": 63488 00:12:18.655 } 00:12:18.655 ] 00:12:18.655 }' 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.655 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:18.915 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=378 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.915 "name": "raid_bdev1", 00:12:18.915 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:18.915 "strip_size_kb": 0, 00:12:18.915 "state": "online", 00:12:18.915 "raid_level": "raid1", 00:12:18.915 "superblock": true, 00:12:18.915 "num_base_bdevs": 2, 00:12:18.915 "num_base_bdevs_discovered": 2, 00:12:18.915 "num_base_bdevs_operational": 2, 00:12:18.915 "process": { 00:12:18.915 "type": "rebuild", 00:12:18.915 "target": "spare", 00:12:18.915 "progress": { 00:12:18.915 "blocks": 22528, 00:12:18.915 "percent": 35 00:12:18.915 } 00:12:18.915 }, 00:12:18.915 "base_bdevs_list": [ 00:12:18.915 { 00:12:18.915 "name": "spare", 00:12:18.915 "uuid": "62e8fa51-f21f-5d62-bf25-f8a159fe2381", 00:12:18.915 "is_configured": true, 00:12:18.915 "data_offset": 2048, 00:12:18.915 "data_size": 63488 00:12:18.915 }, 00:12:18.915 { 00:12:18.915 "name": "BaseBdev2", 00:12:18.915 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:18.915 "is_configured": true, 00:12:18.915 "data_offset": 2048, 00:12:18.915 "data_size": 63488 00:12:18.915 } 00:12:18.915 ] 00:12:18.915 }' 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.915 21:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.856 "name": "raid_bdev1", 00:12:19.856 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:19.856 "strip_size_kb": 0, 00:12:19.856 "state": "online", 00:12:19.856 "raid_level": "raid1", 00:12:19.856 "superblock": true, 00:12:19.856 "num_base_bdevs": 2, 00:12:19.856 "num_base_bdevs_discovered": 2, 00:12:19.856 "num_base_bdevs_operational": 2, 00:12:19.856 "process": { 00:12:19.856 "type": "rebuild", 00:12:19.856 "target": "spare", 
00:12:19.856 "progress": { 00:12:19.856 "blocks": 45056, 00:12:19.856 "percent": 70 00:12:19.856 } 00:12:19.856 }, 00:12:19.856 "base_bdevs_list": [ 00:12:19.856 { 00:12:19.856 "name": "spare", 00:12:19.856 "uuid": "62e8fa51-f21f-5d62-bf25-f8a159fe2381", 00:12:19.856 "is_configured": true, 00:12:19.856 "data_offset": 2048, 00:12:19.856 "data_size": 63488 00:12:19.856 }, 00:12:19.856 { 00:12:19.856 "name": "BaseBdev2", 00:12:19.856 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:19.856 "is_configured": true, 00:12:19.856 "data_offset": 2048, 00:12:19.856 "data_size": 63488 00:12:19.856 } 00:12:19.856 ] 00:12:19.856 }' 00:12:19.856 21:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.117 21:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.117 21:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.117 21:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.117 21:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:20.686 [2024-11-26 21:19:38.790797] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:20.686 [2024-11-26 21:19:38.790875] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:20.686 [2024-11-26 21:19:38.791006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.946 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:20.946 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.946 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.946 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:20.946 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.946 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.206 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.207 "name": "raid_bdev1", 00:12:21.207 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:21.207 "strip_size_kb": 0, 00:12:21.207 "state": "online", 00:12:21.207 "raid_level": "raid1", 00:12:21.207 "superblock": true, 00:12:21.207 "num_base_bdevs": 2, 00:12:21.207 "num_base_bdevs_discovered": 2, 00:12:21.207 "num_base_bdevs_operational": 2, 00:12:21.207 "base_bdevs_list": [ 00:12:21.207 { 00:12:21.207 "name": "spare", 00:12:21.207 "uuid": "62e8fa51-f21f-5d62-bf25-f8a159fe2381", 00:12:21.207 "is_configured": true, 00:12:21.207 "data_offset": 2048, 00:12:21.207 "data_size": 63488 00:12:21.207 }, 00:12:21.207 { 00:12:21.207 "name": "BaseBdev2", 00:12:21.207 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:21.207 "is_configured": true, 00:12:21.207 "data_offset": 2048, 00:12:21.207 "data_size": 63488 00:12:21.207 } 00:12:21.207 ] 00:12:21.207 }' 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:21.207 
21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.207 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.207 "name": "raid_bdev1", 00:12:21.207 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:21.207 "strip_size_kb": 0, 00:12:21.207 "state": "online", 00:12:21.207 "raid_level": "raid1", 00:12:21.207 "superblock": true, 00:12:21.207 "num_base_bdevs": 2, 00:12:21.207 "num_base_bdevs_discovered": 2, 00:12:21.208 "num_base_bdevs_operational": 2, 00:12:21.208 "base_bdevs_list": [ 00:12:21.208 { 00:12:21.208 "name": "spare", 00:12:21.208 "uuid": 
"62e8fa51-f21f-5d62-bf25-f8a159fe2381", 00:12:21.208 "is_configured": true, 00:12:21.208 "data_offset": 2048, 00:12:21.208 "data_size": 63488 00:12:21.208 }, 00:12:21.208 { 00:12:21.208 "name": "BaseBdev2", 00:12:21.208 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:21.208 "is_configured": true, 00:12:21.208 "data_offset": 2048, 00:12:21.208 "data_size": 63488 00:12:21.208 } 00:12:21.208 ] 00:12:21.208 }' 00:12:21.208 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.208 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:21.208 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.469 21:19:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.469 "name": "raid_bdev1", 00:12:21.469 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:21.469 "strip_size_kb": 0, 00:12:21.469 "state": "online", 00:12:21.469 "raid_level": "raid1", 00:12:21.469 "superblock": true, 00:12:21.469 "num_base_bdevs": 2, 00:12:21.469 "num_base_bdevs_discovered": 2, 00:12:21.469 "num_base_bdevs_operational": 2, 00:12:21.469 "base_bdevs_list": [ 00:12:21.469 { 00:12:21.469 "name": "spare", 00:12:21.469 "uuid": "62e8fa51-f21f-5d62-bf25-f8a159fe2381", 00:12:21.469 "is_configured": true, 00:12:21.469 "data_offset": 2048, 00:12:21.469 "data_size": 63488 00:12:21.469 }, 00:12:21.469 { 00:12:21.469 "name": "BaseBdev2", 00:12:21.469 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:21.469 "is_configured": true, 00:12:21.469 "data_offset": 2048, 00:12:21.469 "data_size": 63488 00:12:21.469 } 00:12:21.469 ] 00:12:21.469 }' 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.469 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.729 [2024-11-26 21:19:39.765707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:21.729 [2024-11-26 21:19:39.765793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:21.729 [2024-11-26 21:19:39.765895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.729 [2024-11-26 21:19:39.766008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.729 [2024-11-26 21:19:39.766051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:21.729 21:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:21.989 /dev/nbd0 00:12:21.989 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:21.989 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:21.989 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:21.989 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:21.989 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:21.989 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:21.989 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:21.989 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:21.989 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:21.990 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:21.990 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.990 1+0 records in 00:12:21.990 1+0 records out 00:12:21.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408307 s, 10.0 MB/s 00:12:21.990 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.990 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:21.990 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.990 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:21.990 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:21.990 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:21.990 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:21.990 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:22.250 /dev/nbd1 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:22.250 21:19:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.250 1+0 records in 00:12:22.250 1+0 records out 00:12:22.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506772 s, 8.1 MB/s 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.250 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:22.511 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:22.511 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.511 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:22.511 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:22.511 
21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:22.511 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.511 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.771 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.771 [2024-11-26 21:19:40.921392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:22.771 [2024-11-26 21:19:40.921453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.771 [2024-11-26 21:19:40.921499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:22.771 [2024-11-26 21:19:40.921510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.771 [2024-11-26 21:19:40.924085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.771 [2024-11-26 21:19:40.924164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:22.771 [2024-11-26 21:19:40.924345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:22.771 [2024-11-26 
21:19:40.924429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.771 [2024-11-26 21:19:40.924593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.031 spare 00:12:23.031 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.031 21:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:23.031 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.031 21:19:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.031 [2024-11-26 21:19:41.024542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:23.031 [2024-11-26 21:19:41.024575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:23.031 [2024-11-26 21:19:41.024865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:23.031 [2024-11-26 21:19:41.025099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:23.031 [2024-11-26 21:19:41.025111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:23.031 [2024-11-26 21:19:41.025309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.031 "name": "raid_bdev1", 00:12:23.031 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:23.031 "strip_size_kb": 0, 00:12:23.031 "state": "online", 00:12:23.031 "raid_level": "raid1", 00:12:23.031 "superblock": true, 00:12:23.031 "num_base_bdevs": 2, 00:12:23.031 "num_base_bdevs_discovered": 2, 00:12:23.031 "num_base_bdevs_operational": 2, 00:12:23.031 "base_bdevs_list": [ 00:12:23.031 { 00:12:23.031 "name": "spare", 00:12:23.031 "uuid": "62e8fa51-f21f-5d62-bf25-f8a159fe2381", 00:12:23.031 "is_configured": true, 00:12:23.031 "data_offset": 2048, 00:12:23.031 "data_size": 63488 00:12:23.031 }, 00:12:23.031 { 00:12:23.031 "name": "BaseBdev2", 00:12:23.031 "uuid": 
"7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:23.031 "is_configured": true, 00:12:23.031 "data_offset": 2048, 00:12:23.031 "data_size": 63488 00:12:23.031 } 00:12:23.031 ] 00:12:23.031 }' 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.031 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.291 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:23.291 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.291 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:23.291 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:23.291 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.551 "name": "raid_bdev1", 00:12:23.551 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:23.551 "strip_size_kb": 0, 00:12:23.551 "state": "online", 00:12:23.551 "raid_level": "raid1", 00:12:23.551 "superblock": true, 00:12:23.551 "num_base_bdevs": 2, 00:12:23.551 "num_base_bdevs_discovered": 2, 00:12:23.551 "num_base_bdevs_operational": 2, 00:12:23.551 "base_bdevs_list": [ 00:12:23.551 { 
00:12:23.551 "name": "spare", 00:12:23.551 "uuid": "62e8fa51-f21f-5d62-bf25-f8a159fe2381", 00:12:23.551 "is_configured": true, 00:12:23.551 "data_offset": 2048, 00:12:23.551 "data_size": 63488 00:12:23.551 }, 00:12:23.551 { 00:12:23.551 "name": "BaseBdev2", 00:12:23.551 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:23.551 "is_configured": true, 00:12:23.551 "data_offset": 2048, 00:12:23.551 "data_size": 63488 00:12:23.551 } 00:12:23.551 ] 00:12:23.551 }' 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.551 [2024-11-26 21:19:41.640294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.551 "name": "raid_bdev1", 00:12:23.551 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:23.551 "strip_size_kb": 0, 00:12:23.551 
"state": "online", 00:12:23.551 "raid_level": "raid1", 00:12:23.551 "superblock": true, 00:12:23.551 "num_base_bdevs": 2, 00:12:23.551 "num_base_bdevs_discovered": 1, 00:12:23.551 "num_base_bdevs_operational": 1, 00:12:23.551 "base_bdevs_list": [ 00:12:23.551 { 00:12:23.551 "name": null, 00:12:23.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.551 "is_configured": false, 00:12:23.551 "data_offset": 0, 00:12:23.551 "data_size": 63488 00:12:23.551 }, 00:12:23.551 { 00:12:23.551 "name": "BaseBdev2", 00:12:23.551 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:23.551 "is_configured": true, 00:12:23.551 "data_offset": 2048, 00:12:23.551 "data_size": 63488 00:12:23.551 } 00:12:23.551 ] 00:12:23.551 }' 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.551 21:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.120 21:19:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.120 21:19:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.120 21:19:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.120 [2024-11-26 21:19:42.095696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.120 [2024-11-26 21:19:42.095944] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:24.120 [2024-11-26 21:19:42.096056] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:24.120 [2024-11-26 21:19:42.096123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.120 [2024-11-26 21:19:42.112613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:24.120 21:19:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.120 21:19:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:24.120 [2024-11-26 21:19:42.114559] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.060 "name": "raid_bdev1", 00:12:25.060 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:25.060 "strip_size_kb": 0, 00:12:25.060 "state": "online", 00:12:25.060 "raid_level": "raid1", 
00:12:25.060 "superblock": true, 00:12:25.060 "num_base_bdevs": 2, 00:12:25.060 "num_base_bdevs_discovered": 2, 00:12:25.060 "num_base_bdevs_operational": 2, 00:12:25.060 "process": { 00:12:25.060 "type": "rebuild", 00:12:25.060 "target": "spare", 00:12:25.060 "progress": { 00:12:25.060 "blocks": 20480, 00:12:25.060 "percent": 32 00:12:25.060 } 00:12:25.060 }, 00:12:25.060 "base_bdevs_list": [ 00:12:25.060 { 00:12:25.060 "name": "spare", 00:12:25.060 "uuid": "62e8fa51-f21f-5d62-bf25-f8a159fe2381", 00:12:25.060 "is_configured": true, 00:12:25.060 "data_offset": 2048, 00:12:25.060 "data_size": 63488 00:12:25.060 }, 00:12:25.060 { 00:12:25.060 "name": "BaseBdev2", 00:12:25.060 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:25.060 "is_configured": true, 00:12:25.060 "data_offset": 2048, 00:12:25.060 "data_size": 63488 00:12:25.060 } 00:12:25.060 ] 00:12:25.060 }' 00:12:25.060 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.320 [2024-11-26 21:19:43.278012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.320 [2024-11-26 21:19:43.319363] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:25.320 [2024-11-26 21:19:43.319435] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:25.320 [2024-11-26 21:19:43.319449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.320 [2024-11-26 21:19:43.319458] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.320 "name": "raid_bdev1", 00:12:25.320 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:25.320 "strip_size_kb": 0, 00:12:25.320 "state": "online", 00:12:25.320 "raid_level": "raid1", 00:12:25.320 "superblock": true, 00:12:25.320 "num_base_bdevs": 2, 00:12:25.320 "num_base_bdevs_discovered": 1, 00:12:25.320 "num_base_bdevs_operational": 1, 00:12:25.320 "base_bdevs_list": [ 00:12:25.320 { 00:12:25.320 "name": null, 00:12:25.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.320 "is_configured": false, 00:12:25.320 "data_offset": 0, 00:12:25.320 "data_size": 63488 00:12:25.320 }, 00:12:25.320 { 00:12:25.320 "name": "BaseBdev2", 00:12:25.320 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:25.320 "is_configured": true, 00:12:25.320 "data_offset": 2048, 00:12:25.320 "data_size": 63488 00:12:25.320 } 00:12:25.320 ] 00:12:25.320 }' 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.320 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.889 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:25.889 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.889 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.889 [2024-11-26 21:19:43.813523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:25.889 [2024-11-26 21:19:43.813633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.889 [2024-11-26 21:19:43.813672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:25.889 [2024-11-26 21:19:43.813702] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.889 [2024-11-26 21:19:43.814216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.889 [2024-11-26 21:19:43.814287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:25.889 [2024-11-26 21:19:43.814413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:25.889 [2024-11-26 21:19:43.814459] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:25.889 [2024-11-26 21:19:43.814502] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:25.889 [2024-11-26 21:19:43.814562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:25.889 [2024-11-26 21:19:43.830188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:25.889 spare 00:12:25.889 21:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.889 [2024-11-26 21:19:43.832104] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.889 21:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.829 "name": "raid_bdev1", 00:12:26.829 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:26.829 "strip_size_kb": 0, 00:12:26.829 "state": "online", 00:12:26.829 "raid_level": "raid1", 00:12:26.829 "superblock": true, 00:12:26.829 "num_base_bdevs": 2, 00:12:26.829 "num_base_bdevs_discovered": 2, 00:12:26.829 "num_base_bdevs_operational": 2, 00:12:26.829 "process": { 00:12:26.829 "type": "rebuild", 00:12:26.829 "target": "spare", 00:12:26.829 "progress": { 00:12:26.829 "blocks": 20480, 00:12:26.829 "percent": 32 00:12:26.829 } 00:12:26.829 }, 00:12:26.829 "base_bdevs_list": [ 00:12:26.829 { 00:12:26.829 "name": "spare", 00:12:26.829 "uuid": "62e8fa51-f21f-5d62-bf25-f8a159fe2381", 00:12:26.829 "is_configured": true, 00:12:26.829 "data_offset": 2048, 00:12:26.829 "data_size": 63488 00:12:26.829 }, 00:12:26.829 { 00:12:26.829 "name": "BaseBdev2", 00:12:26.829 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:26.829 "is_configured": true, 00:12:26.829 "data_offset": 2048, 00:12:26.829 "data_size": 63488 00:12:26.829 } 00:12:26.829 ] 00:12:26.829 }' 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.829 
21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.829 21:19:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.089 [2024-11-26 21:19:44.984441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.089 [2024-11-26 21:19:45.036819] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:27.089 [2024-11-26 21:19:45.036881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.089 [2024-11-26 21:19:45.036899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.089 [2024-11-26 21:19:45.036906] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.089 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.089 "name": "raid_bdev1", 00:12:27.089 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:27.089 "strip_size_kb": 0, 00:12:27.089 "state": "online", 00:12:27.089 "raid_level": "raid1", 00:12:27.089 "superblock": true, 00:12:27.089 "num_base_bdevs": 2, 00:12:27.089 "num_base_bdevs_discovered": 1, 00:12:27.089 "num_base_bdevs_operational": 1, 00:12:27.089 "base_bdevs_list": [ 00:12:27.089 { 00:12:27.090 "name": null, 00:12:27.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.090 "is_configured": false, 00:12:27.090 "data_offset": 0, 00:12:27.090 "data_size": 63488 00:12:27.090 }, 00:12:27.090 { 00:12:27.090 "name": "BaseBdev2", 00:12:27.090 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:27.090 "is_configured": true, 00:12:27.090 "data_offset": 2048, 00:12:27.090 "data_size": 63488 00:12:27.090 } 00:12:27.090 ] 00:12:27.090 }' 00:12:27.090 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.090 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.659 21:19:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:27.659 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.659 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:27.659 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:27.659 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.659 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.659 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.659 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.659 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.659 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.659 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.659 "name": "raid_bdev1", 00:12:27.659 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:27.659 "strip_size_kb": 0, 00:12:27.659 "state": "online", 00:12:27.659 "raid_level": "raid1", 00:12:27.659 "superblock": true, 00:12:27.659 "num_base_bdevs": 2, 00:12:27.659 "num_base_bdevs_discovered": 1, 00:12:27.659 "num_base_bdevs_operational": 1, 00:12:27.659 "base_bdevs_list": [ 00:12:27.659 { 00:12:27.660 "name": null, 00:12:27.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.660 "is_configured": false, 00:12:27.660 "data_offset": 0, 00:12:27.660 "data_size": 63488 00:12:27.660 }, 00:12:27.660 { 00:12:27.660 "name": "BaseBdev2", 00:12:27.660 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:27.660 "is_configured": true, 00:12:27.660 "data_offset": 2048, 00:12:27.660 "data_size": 
63488 00:12:27.660 } 00:12:27.660 ] 00:12:27.660 }' 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.660 [2024-11-26 21:19:45.664086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:27.660 [2024-11-26 21:19:45.664142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.660 [2024-11-26 21:19:45.664170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:27.660 [2024-11-26 21:19:45.664189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.660 [2024-11-26 21:19:45.664610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.660 [2024-11-26 21:19:45.664639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:27.660 [2024-11-26 21:19:45.664733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:27.660 [2024-11-26 21:19:45.664746] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:27.660 [2024-11-26 21:19:45.664758] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:27.660 [2024-11-26 21:19:45.664769] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:27.660 BaseBdev1 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.660 21:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:28.599 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.600 "name": "raid_bdev1", 00:12:28.600 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:28.600 "strip_size_kb": 0, 00:12:28.600 "state": "online", 00:12:28.600 "raid_level": "raid1", 00:12:28.600 "superblock": true, 00:12:28.600 "num_base_bdevs": 2, 00:12:28.600 "num_base_bdevs_discovered": 1, 00:12:28.600 "num_base_bdevs_operational": 1, 00:12:28.600 "base_bdevs_list": [ 00:12:28.600 { 00:12:28.600 "name": null, 00:12:28.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.600 "is_configured": false, 00:12:28.600 "data_offset": 0, 00:12:28.600 "data_size": 63488 00:12:28.600 }, 00:12:28.600 { 00:12:28.600 "name": "BaseBdev2", 00:12:28.600 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:28.600 "is_configured": true, 00:12:28.600 "data_offset": 2048, 00:12:28.600 "data_size": 63488 00:12:28.600 } 00:12:28.600 ] 00:12:28.600 }' 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.600 21:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.169 "name": "raid_bdev1", 00:12:29.169 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:29.169 "strip_size_kb": 0, 00:12:29.169 "state": "online", 00:12:29.169 "raid_level": "raid1", 00:12:29.169 "superblock": true, 00:12:29.169 "num_base_bdevs": 2, 00:12:29.169 "num_base_bdevs_discovered": 1, 00:12:29.169 "num_base_bdevs_operational": 1, 00:12:29.169 "base_bdevs_list": [ 00:12:29.169 { 00:12:29.169 "name": null, 00:12:29.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.169 "is_configured": false, 00:12:29.169 "data_offset": 0, 00:12:29.169 "data_size": 63488 00:12:29.169 }, 00:12:29.169 { 00:12:29.169 "name": "BaseBdev2", 00:12:29.169 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:29.169 "is_configured": true, 00:12:29.169 "data_offset": 2048, 00:12:29.169 "data_size": 63488 00:12:29.169 } 00:12:29.169 ] 00:12:29.169 }' 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:29.169 21:19:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.169 [2024-11-26 21:19:47.297355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.169 [2024-11-26 21:19:47.297581] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:29.169 [2024-11-26 21:19:47.297648] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:29.169 request: 00:12:29.169 { 00:12:29.169 "base_bdev": "BaseBdev1", 00:12:29.169 "raid_bdev": "raid_bdev1", 00:12:29.169 "method": 
"bdev_raid_add_base_bdev", 00:12:29.169 "req_id": 1 00:12:29.169 } 00:12:29.169 Got JSON-RPC error response 00:12:29.169 response: 00:12:29.169 { 00:12:29.169 "code": -22, 00:12:29.169 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:29.169 } 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:29.169 21:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:30.548 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.549 21:19:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.549 "name": "raid_bdev1", 00:12:30.549 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:30.549 "strip_size_kb": 0, 00:12:30.549 "state": "online", 00:12:30.549 "raid_level": "raid1", 00:12:30.549 "superblock": true, 00:12:30.549 "num_base_bdevs": 2, 00:12:30.549 "num_base_bdevs_discovered": 1, 00:12:30.549 "num_base_bdevs_operational": 1, 00:12:30.549 "base_bdevs_list": [ 00:12:30.549 { 00:12:30.549 "name": null, 00:12:30.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.549 "is_configured": false, 00:12:30.549 "data_offset": 0, 00:12:30.549 "data_size": 63488 00:12:30.549 }, 00:12:30.549 { 00:12:30.549 "name": "BaseBdev2", 00:12:30.549 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:30.549 "is_configured": true, 00:12:30.549 "data_offset": 2048, 00:12:30.549 "data_size": 63488 00:12:30.549 } 00:12:30.549 ] 00:12:30.549 }' 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.549 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.808 "name": "raid_bdev1", 00:12:30.808 "uuid": "a3e91903-e260-429d-ab3d-bfc25c22b44f", 00:12:30.808 "strip_size_kb": 0, 00:12:30.808 "state": "online", 00:12:30.808 "raid_level": "raid1", 00:12:30.808 "superblock": true, 00:12:30.808 "num_base_bdevs": 2, 00:12:30.808 "num_base_bdevs_discovered": 1, 00:12:30.808 "num_base_bdevs_operational": 1, 00:12:30.808 "base_bdevs_list": [ 00:12:30.808 { 00:12:30.808 "name": null, 00:12:30.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.808 "is_configured": false, 00:12:30.808 "data_offset": 0, 00:12:30.808 "data_size": 63488 00:12:30.808 }, 00:12:30.808 { 00:12:30.808 "name": "BaseBdev2", 00:12:30.808 "uuid": "7d330c8f-afc3-5520-b77e-58f9ff0636ab", 00:12:30.808 "is_configured": true, 00:12:30.808 "data_offset": 2048, 00:12:30.808 "data_size": 63488 00:12:30.808 } 00:12:30.808 ] 00:12:30.808 }' 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75529 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75529 ']' 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75529 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75529 00:12:30.808 killing process with pid 75529 00:12:30.808 Received shutdown signal, test time was about 60.000000 seconds 00:12:30.808 00:12:30.808 Latency(us) 00:12:30.808 [2024-11-26T21:19:48.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.808 [2024-11-26T21:19:48.964Z] =================================================================================================================== 00:12:30.808 [2024-11-26T21:19:48.964Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75529' 00:12:30.808 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75529 00:12:30.808 [2024-11-26 21:19:48.906875] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.808 [2024-11-26 
21:19:48.907015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.809 21:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75529 00:12:30.809 [2024-11-26 21:19:48.907066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.809 [2024-11-26 21:19:48.907079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:31.067 [2024-11-26 21:19:49.206080] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:32.483 00:12:32.483 real 0m23.062s 00:12:32.483 user 0m28.190s 00:12:32.483 sys 0m3.560s 00:12:32.483 ************************************ 00:12:32.483 END TEST raid_rebuild_test_sb 00:12:32.483 ************************************ 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.483 21:19:50 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:32.483 21:19:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:32.483 21:19:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.483 21:19:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.483 ************************************ 00:12:32.483 START TEST raid_rebuild_test_io 00:12:32.483 ************************************ 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:32.483 
21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76259 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76259 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76259 ']' 00:12:32.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.483 21:19:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.483 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:32.483 Zero copy mechanism will not be used. 00:12:32.483 [2024-11-26 21:19:50.468984] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:12:32.483 [2024-11-26 21:19:50.469177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76259 ] 00:12:32.744 [2024-11-26 21:19:50.643868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.744 [2024-11-26 21:19:50.751758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.004 [2024-11-26 21:19:50.939339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.004 [2024-11-26 21:19:50.939455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.264 BaseBdev1_malloc 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.264 [2024-11-26 21:19:51.337219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:33.264 [2024-11-26 21:19:51.337328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.264 [2024-11-26 21:19:51.337354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:33.264 [2024-11-26 21:19:51.337365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.264 [2024-11-26 21:19:51.339461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.264 [2024-11-26 21:19:51.339501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.264 BaseBdev1 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.264 BaseBdev2_malloc 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.264 [2024-11-26 21:19:51.391221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:33.264 [2024-11-26 21:19:51.391359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.264 [2024-11-26 21:19:51.391387] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:33.264 [2024-11-26 21:19:51.391401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.264 [2024-11-26 21:19:51.393566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.264 [2024-11-26 21:19:51.393604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.264 BaseBdev2 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.264 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.525 spare_malloc 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.525 spare_delay 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.525 [2024-11-26 21:19:51.478527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:33.525 [2024-11-26 21:19:51.478587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.525 [2024-11-26 21:19:51.478623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:33.525 [2024-11-26 21:19:51.478633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.525 [2024-11-26 21:19:51.480700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.525 [2024-11-26 21:19:51.480743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:33.525 spare 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.525 [2024-11-26 21:19:51.490556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.525 [2024-11-26 21:19:51.492315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.525 [2024-11-26 21:19:51.492405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:33.525 [2024-11-26 21:19:51.492420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:33.525 [2024-11-26 21:19:51.492671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:33.525 [2024-11-26 21:19:51.492824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:33.525 [2024-11-26 21:19:51.492848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:12:33.525 [2024-11-26 21:19:51.493016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.525 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.525 
"name": "raid_bdev1", 00:12:33.525 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:33.525 "strip_size_kb": 0, 00:12:33.525 "state": "online", 00:12:33.525 "raid_level": "raid1", 00:12:33.525 "superblock": false, 00:12:33.525 "num_base_bdevs": 2, 00:12:33.525 "num_base_bdevs_discovered": 2, 00:12:33.525 "num_base_bdevs_operational": 2, 00:12:33.526 "base_bdevs_list": [ 00:12:33.526 { 00:12:33.526 "name": "BaseBdev1", 00:12:33.526 "uuid": "e2d1b518-f20b-554d-895c-259919588402", 00:12:33.526 "is_configured": true, 00:12:33.526 "data_offset": 0, 00:12:33.526 "data_size": 65536 00:12:33.526 }, 00:12:33.526 { 00:12:33.526 "name": "BaseBdev2", 00:12:33.526 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:33.526 "is_configured": true, 00:12:33.526 "data_offset": 0, 00:12:33.526 "data_size": 65536 00:12:33.526 } 00:12:33.526 ] 00:12:33.526 }' 00:12:33.526 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.526 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.097 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:34.097 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:34.097 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.097 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.097 [2024-11-26 21:19:51.954078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.097 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.097 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:34.097 21:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:34.097 21:19:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.097 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.097 21:19:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.097 [2024-11-26 21:19:52.017608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:34.097 21:19:52 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.097 "name": "raid_bdev1", 00:12:34.097 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:34.097 "strip_size_kb": 0, 00:12:34.097 "state": "online", 00:12:34.097 "raid_level": "raid1", 00:12:34.097 "superblock": false, 00:12:34.097 "num_base_bdevs": 2, 00:12:34.097 "num_base_bdevs_discovered": 1, 00:12:34.097 "num_base_bdevs_operational": 1, 00:12:34.097 "base_bdevs_list": [ 00:12:34.097 { 00:12:34.097 "name": null, 00:12:34.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.097 "is_configured": false, 00:12:34.097 "data_offset": 0, 00:12:34.097 "data_size": 65536 00:12:34.097 }, 00:12:34.097 { 00:12:34.097 "name": "BaseBdev2", 00:12:34.097 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:34.097 "is_configured": true, 00:12:34.097 "data_offset": 0, 00:12:34.097 "data_size": 65536 00:12:34.097 } 00:12:34.097 ] 00:12:34.097 }' 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:34.097 21:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.097 [2024-11-26 21:19:52.113422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:34.097 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:34.097 Zero copy mechanism will not be used. 00:12:34.097 Running I/O for 60 seconds... 00:12:34.358 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:34.358 21:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.358 21:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.358 [2024-11-26 21:19:52.499513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:34.618 21:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.619 21:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:34.619 [2024-11-26 21:19:52.551050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:34.619 [2024-11-26 21:19:52.552900] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:34.619 [2024-11-26 21:19:52.659977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:34.619 [2024-11-26 21:19:52.660492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:34.879 [2024-11-26 21:19:52.892091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:34.879 [2024-11-26 21:19:52.892442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:35.139 173.00 IOPS, 519.00 MiB/s 
[2024-11-26T21:19:53.295Z] [2024-11-26 21:19:53.222369] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:35.400 [2024-11-26 21:19:53.431089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:35.400 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.400 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.400 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.400 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.400 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.400 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.400 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.400 21:19:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.400 21:19:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.660 21:19:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.660 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.660 "name": "raid_bdev1", 00:12:35.660 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:35.660 "strip_size_kb": 0, 00:12:35.660 "state": "online", 00:12:35.660 "raid_level": "raid1", 00:12:35.660 "superblock": false, 00:12:35.660 "num_base_bdevs": 2, 00:12:35.660 "num_base_bdevs_discovered": 2, 00:12:35.660 "num_base_bdevs_operational": 2, 00:12:35.660 "process": { 00:12:35.660 "type": "rebuild", 00:12:35.660 "target": "spare", 
00:12:35.660 "progress": { 00:12:35.660 "blocks": 10240, 00:12:35.660 "percent": 15 00:12:35.660 } 00:12:35.660 }, 00:12:35.660 "base_bdevs_list": [ 00:12:35.660 { 00:12:35.660 "name": "spare", 00:12:35.660 "uuid": "9d0243bc-98aa-56c0-8624-d32d92176b2d", 00:12:35.660 "is_configured": true, 00:12:35.660 "data_offset": 0, 00:12:35.660 "data_size": 65536 00:12:35.660 }, 00:12:35.660 { 00:12:35.660 "name": "BaseBdev2", 00:12:35.660 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:35.660 "is_configured": true, 00:12:35.660 "data_offset": 0, 00:12:35.660 "data_size": 65536 00:12:35.660 } 00:12:35.660 ] 00:12:35.660 }' 00:12:35.660 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.660 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.660 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.660 [2024-11-26 21:19:53.660140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:35.660 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.660 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:35.660 21:19:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.660 21:19:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.660 [2024-11-26 21:19:53.692359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.660 [2024-11-26 21:19:53.767661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:35.660 [2024-11-26 21:19:53.768136] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 
18432 00:12:35.920 [2024-11-26 21:19:53.875382] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:35.920 [2024-11-26 21:19:53.878175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.920 [2024-11-26 21:19:53.878218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.920 [2024-11-26 21:19:53.878230] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:35.920 [2024-11-26 21:19:53.909335] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.920 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.920 "name": "raid_bdev1", 00:12:35.920 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:35.920 "strip_size_kb": 0, 00:12:35.920 "state": "online", 00:12:35.920 "raid_level": "raid1", 00:12:35.920 "superblock": false, 00:12:35.920 "num_base_bdevs": 2, 00:12:35.920 "num_base_bdevs_discovered": 1, 00:12:35.920 "num_base_bdevs_operational": 1, 00:12:35.920 "base_bdevs_list": [ 00:12:35.920 { 00:12:35.920 "name": null, 00:12:35.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.920 "is_configured": false, 00:12:35.920 "data_offset": 0, 00:12:35.921 "data_size": 65536 00:12:35.921 }, 00:12:35.921 { 00:12:35.921 "name": "BaseBdev2", 00:12:35.921 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:35.921 "is_configured": true, 00:12:35.921 "data_offset": 0, 00:12:35.921 "data_size": 65536 00:12:35.921 } 00:12:35.921 ] 00:12:35.921 }' 00:12:35.921 21:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.921 21:19:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.441 144.00 IOPS, 432.00 MiB/s [2024-11-26T21:19:54.597Z] 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.441 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.441 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.442 "name": "raid_bdev1", 00:12:36.442 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:36.442 "strip_size_kb": 0, 00:12:36.442 "state": "online", 00:12:36.442 "raid_level": "raid1", 00:12:36.442 "superblock": false, 00:12:36.442 "num_base_bdevs": 2, 00:12:36.442 "num_base_bdevs_discovered": 1, 00:12:36.442 "num_base_bdevs_operational": 1, 00:12:36.442 "base_bdevs_list": [ 00:12:36.442 { 00:12:36.442 "name": null, 00:12:36.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.442 "is_configured": false, 00:12:36.442 "data_offset": 0, 00:12:36.442 "data_size": 65536 00:12:36.442 }, 00:12:36.442 { 00:12:36.442 "name": "BaseBdev2", 00:12:36.442 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:36.442 "is_configured": true, 00:12:36.442 "data_offset": 0, 00:12:36.442 "data_size": 65536 00:12:36.442 } 00:12:36.442 ] 00:12:36.442 }' 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.442 [2024-11-26 21:19:54.555186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.442 21:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:36.702 [2024-11-26 21:19:54.599534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:36.702 [2024-11-26 21:19:54.601519] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.702 [2024-11-26 21:19:54.715089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:36.702 [2024-11-26 21:19:54.715744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:36.702 [2024-11-26 21:19:54.846706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:37.531 174.00 IOPS, 522.00 MiB/s [2024-11-26T21:19:55.687Z] [2024-11-26 21:19:55.523349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:37.531 [2024-11-26 21:19:55.523790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:37.531 21:19:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.531 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.531 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.531 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.531 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.531 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.531 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.531 21:19:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.531 21:19:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.531 21:19:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.531 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.531 "name": "raid_bdev1", 00:12:37.531 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:37.531 "strip_size_kb": 0, 00:12:37.531 "state": "online", 00:12:37.531 "raid_level": "raid1", 00:12:37.531 "superblock": false, 00:12:37.531 "num_base_bdevs": 2, 00:12:37.531 "num_base_bdevs_discovered": 2, 00:12:37.531 "num_base_bdevs_operational": 2, 00:12:37.531 "process": { 00:12:37.531 "type": "rebuild", 00:12:37.531 "target": "spare", 00:12:37.531 "progress": { 00:12:37.531 "blocks": 16384, 00:12:37.531 "percent": 25 00:12:37.531 } 00:12:37.531 }, 00:12:37.531 "base_bdevs_list": [ 00:12:37.531 { 00:12:37.531 "name": "spare", 00:12:37.531 "uuid": "9d0243bc-98aa-56c0-8624-d32d92176b2d", 00:12:37.531 "is_configured": true, 00:12:37.531 "data_offset": 0, 00:12:37.531 "data_size": 65536 00:12:37.531 }, 00:12:37.531 { 
00:12:37.531 "name": "BaseBdev2", 00:12:37.531 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:37.532 "is_configured": true, 00:12:37.532 "data_offset": 0, 00:12:37.532 "data_size": 65536 00:12:37.532 } 00:12:37.532 ] 00:12:37.532 }' 00:12:37.532 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.791 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.791 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.791 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.792 [2024-11-26 21:19:55.750142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.792 "name": "raid_bdev1", 00:12:37.792 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:37.792 "strip_size_kb": 0, 00:12:37.792 "state": "online", 00:12:37.792 "raid_level": "raid1", 00:12:37.792 "superblock": false, 00:12:37.792 "num_base_bdevs": 2, 00:12:37.792 "num_base_bdevs_discovered": 2, 00:12:37.792 "num_base_bdevs_operational": 2, 00:12:37.792 "process": { 00:12:37.792 "type": "rebuild", 00:12:37.792 "target": "spare", 00:12:37.792 "progress": { 00:12:37.792 "blocks": 20480, 00:12:37.792 "percent": 31 00:12:37.792 } 00:12:37.792 }, 00:12:37.792 "base_bdevs_list": [ 00:12:37.792 { 00:12:37.792 "name": "spare", 00:12:37.792 "uuid": "9d0243bc-98aa-56c0-8624-d32d92176b2d", 00:12:37.792 "is_configured": true, 00:12:37.792 "data_offset": 0, 00:12:37.792 "data_size": 65536 00:12:37.792 }, 00:12:37.792 { 00:12:37.792 "name": "BaseBdev2", 00:12:37.792 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:37.792 "is_configured": true, 00:12:37.792 "data_offset": 0, 00:12:37.792 "data_size": 65536 00:12:37.792 } 00:12:37.792 ] 00:12:37.792 }' 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.792 21:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:38.051 [2024-11-26 21:19:56.069904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:38.052 155.00 IOPS, 465.00 MiB/s [2024-11-26T21:19:56.208Z] [2024-11-26 21:19:56.181528] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:38.312 [2024-11-26 21:19:56.403727] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:38.572 [2024-11-26 21:19:56.525587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:38.833 [2024-11-26 21:19:56.861872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.833 21:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.833 "name": "raid_bdev1", 00:12:38.833 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:38.833 "strip_size_kb": 0, 00:12:38.833 "state": "online", 00:12:38.833 "raid_level": "raid1", 00:12:38.833 "superblock": false, 00:12:38.833 "num_base_bdevs": 2, 00:12:38.833 "num_base_bdevs_discovered": 2, 00:12:38.833 "num_base_bdevs_operational": 2, 00:12:38.833 "process": { 00:12:38.833 "type": "rebuild", 00:12:38.833 "target": "spare", 00:12:38.833 "progress": { 00:12:38.833 "blocks": 38912, 00:12:38.833 "percent": 59 00:12:38.833 } 00:12:38.833 }, 00:12:38.833 "base_bdevs_list": [ 00:12:38.833 { 00:12:38.833 "name": "spare", 00:12:38.833 "uuid": "9d0243bc-98aa-56c0-8624-d32d92176b2d", 00:12:38.833 "is_configured": true, 00:12:38.833 "data_offset": 0, 00:12:38.833 "data_size": 65536 00:12:38.833 }, 00:12:38.833 { 00:12:38.833 "name": "BaseBdev2", 00:12:38.833 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:38.833 "is_configured": true, 00:12:38.833 "data_offset": 0, 00:12:38.833 "data_size": 65536 00:12:38.833 } 00:12:38.833 ] 00:12:38.834 }' 00:12:38.834 21:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.111 21:19:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.111 21:19:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.111 21:19:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.111 21:19:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:39.111 [2024-11-26 21:19:57.070543] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:39.111 [2024-11-26 21:19:57.070955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:39.370 132.60 IOPS, 397.80 MiB/s [2024-11-26T21:19:57.526Z] [2024-11-26 21:19:57.513479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:39.938 [2024-11-26 21:19:57.840823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:39.938 [2024-11-26 21:19:58.053014] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:39.938 [2024-11-26 21:19:58.053405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:39.938 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:39.938 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.938 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.938 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.938 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.938 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.938 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:12:39.938 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.938 21:19:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.938 21:19:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.938 21:19:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.198 115.67 IOPS, 347.00 MiB/s [2024-11-26T21:19:58.354Z] 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.198 "name": "raid_bdev1", 00:12:40.198 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:40.198 "strip_size_kb": 0, 00:12:40.198 "state": "online", 00:12:40.198 "raid_level": "raid1", 00:12:40.198 "superblock": false, 00:12:40.198 "num_base_bdevs": 2, 00:12:40.198 "num_base_bdevs_discovered": 2, 00:12:40.198 "num_base_bdevs_operational": 2, 00:12:40.198 "process": { 00:12:40.198 "type": "rebuild", 00:12:40.198 "target": "spare", 00:12:40.198 "progress": { 00:12:40.198 "blocks": 53248, 00:12:40.198 "percent": 81 00:12:40.198 } 00:12:40.198 }, 00:12:40.198 "base_bdevs_list": [ 00:12:40.198 { 00:12:40.198 "name": "spare", 00:12:40.198 "uuid": "9d0243bc-98aa-56c0-8624-d32d92176b2d", 00:12:40.198 "is_configured": true, 00:12:40.198 "data_offset": 0, 00:12:40.198 "data_size": 65536 00:12:40.198 }, 00:12:40.198 { 00:12:40.198 "name": "BaseBdev2", 00:12:40.198 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:40.198 "is_configured": true, 00:12:40.198 "data_offset": 0, 00:12:40.198 "data_size": 65536 00:12:40.198 } 00:12:40.198 ] 00:12:40.198 }' 00:12:40.199 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.199 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.199 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:12:40.199 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.199 21:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.458 [2024-11-26 21:19:58.371785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:40.717 [2024-11-26 21:19:58.775765] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:40.717 [2024-11-26 21:19:58.807497] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:40.717 [2024-11-26 21:19:58.809828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.236 103.71 IOPS, 311.14 MiB/s [2024-11-26T21:19:59.393Z] 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.237 "name": "raid_bdev1", 00:12:41.237 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:41.237 "strip_size_kb": 0, 00:12:41.237 "state": "online", 00:12:41.237 "raid_level": "raid1", 00:12:41.237 "superblock": false, 00:12:41.237 "num_base_bdevs": 2, 00:12:41.237 "num_base_bdevs_discovered": 2, 00:12:41.237 "num_base_bdevs_operational": 2, 00:12:41.237 "base_bdevs_list": [ 00:12:41.237 { 00:12:41.237 "name": "spare", 00:12:41.237 "uuid": "9d0243bc-98aa-56c0-8624-d32d92176b2d", 00:12:41.237 "is_configured": true, 00:12:41.237 "data_offset": 0, 00:12:41.237 "data_size": 65536 00:12:41.237 }, 00:12:41.237 { 00:12:41.237 "name": "BaseBdev2", 00:12:41.237 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:41.237 "is_configured": true, 00:12:41.237 "data_offset": 0, 00:12:41.237 "data_size": 65536 00:12:41.237 } 00:12:41.237 ] 00:12:41.237 }' 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.237 21:19:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.497 "name": "raid_bdev1", 00:12:41.497 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:41.497 "strip_size_kb": 0, 00:12:41.497 "state": "online", 00:12:41.497 "raid_level": "raid1", 00:12:41.497 "superblock": false, 00:12:41.497 "num_base_bdevs": 2, 00:12:41.497 "num_base_bdevs_discovered": 2, 00:12:41.497 "num_base_bdevs_operational": 2, 00:12:41.497 "base_bdevs_list": [ 00:12:41.497 { 00:12:41.497 "name": "spare", 00:12:41.497 "uuid": "9d0243bc-98aa-56c0-8624-d32d92176b2d", 00:12:41.497 "is_configured": true, 00:12:41.497 "data_offset": 0, 00:12:41.497 "data_size": 65536 00:12:41.497 }, 00:12:41.497 { 00:12:41.497 "name": "BaseBdev2", 00:12:41.497 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:41.497 "is_configured": true, 00:12:41.497 "data_offset": 0, 00:12:41.497 "data_size": 65536 00:12:41.497 } 00:12:41.497 ] 00:12:41.497 }' 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.497 21:19:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.497 "name": "raid_bdev1", 00:12:41.497 "uuid": "938df90a-1899-4ebf-8d62-092e2aeb899c", 00:12:41.497 "strip_size_kb": 0, 00:12:41.497 "state": "online", 
00:12:41.497 "raid_level": "raid1", 00:12:41.497 "superblock": false, 00:12:41.497 "num_base_bdevs": 2, 00:12:41.497 "num_base_bdevs_discovered": 2, 00:12:41.497 "num_base_bdevs_operational": 2, 00:12:41.497 "base_bdevs_list": [ 00:12:41.497 { 00:12:41.497 "name": "spare", 00:12:41.497 "uuid": "9d0243bc-98aa-56c0-8624-d32d92176b2d", 00:12:41.497 "is_configured": true, 00:12:41.497 "data_offset": 0, 00:12:41.497 "data_size": 65536 00:12:41.497 }, 00:12:41.497 { 00:12:41.497 "name": "BaseBdev2", 00:12:41.497 "uuid": "d0e235dd-149a-518d-a14b-14d023c5d4a4", 00:12:41.497 "is_configured": true, 00:12:41.497 "data_offset": 0, 00:12:41.497 "data_size": 65536 00:12:41.497 } 00:12:41.497 ] 00:12:41.497 }' 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.497 21:19:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.067 21:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:42.067 21:19:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.067 21:19:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.067 [2024-11-26 21:19:59.924262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:42.067 [2024-11-26 21:19:59.924294] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.067 00:12:42.067 Latency(us) 00:12:42.067 [2024-11-26T21:20:00.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.067 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:42.067 raid_bdev1 : 7.92 96.85 290.54 0.00 0.00 13772.41 298.70 113557.58 00:12:42.067 [2024-11-26T21:20:00.223Z] =================================================================================================================== 00:12:42.067 
[2024-11-26T21:20:00.223Z] Total : 96.85 290.54 0.00 0.00 13772.41 298.70 113557.58 00:12:42.067 [2024-11-26 21:20:00.040779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.067 [2024-11-26 21:20:00.040835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.067 [2024-11-26 21:20:00.040902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.067 [2024-11-26 21:20:00.040917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:42.067 { 00:12:42.067 "results": [ 00:12:42.068 { 00:12:42.068 "job": "raid_bdev1", 00:12:42.068 "core_mask": "0x1", 00:12:42.068 "workload": "randrw", 00:12:42.068 "percentage": 50, 00:12:42.068 "status": "finished", 00:12:42.068 "queue_depth": 2, 00:12:42.068 "io_size": 3145728, 00:12:42.068 "runtime": 7.919796, 00:12:42.068 "iops": 96.84592885978377, 00:12:42.068 "mibps": 290.5377865793513, 00:12:42.068 "io_failed": 0, 00:12:42.068 "io_timeout": 0, 00:12:42.068 "avg_latency_us": 13772.409681000667, 00:12:42.068 "min_latency_us": 298.70393013100437, 00:12:42.068 "max_latency_us": 113557.57554585153 00:12:42.068 } 00:12:42.068 ], 00:12:42.068 "core_count": 1 00:12:42.068 } 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:42.068 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:42.328 /dev/nbd0 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i 
<= 20 )) 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.328 1+0 records in 00:12:42.328 1+0 records out 00:12:42.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369023 s, 11.1 MB/s 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:42.328 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:42.587 /dev/nbd1 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.587 21:20:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.587 1+0 records in 00:12:42.587 1+0 records out 00:12:42.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490732 s, 8.3 MB/s 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:42.587 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd1 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.848 21:20:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.108 21:20:01 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76259 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76259 ']' 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76259 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76259 00:12:43.108 killing process with pid 76259 00:12:43.108 Received shutdown signal, test time was about 9.163530 seconds 00:12:43.108 00:12:43.108 Latency(us) 00:12:43.108 [2024-11-26T21:20:01.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.108 [2024-11-26T21:20:01.264Z] =================================================================================================================== 00:12:43.108 [2024-11-26T21:20:01.264Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.108 21:20:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76259' 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76259 00:12:43.108 [2024-11-26 21:20:01.261358] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.108 21:20:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76259 00:12:43.367 [2024-11-26 21:20:01.486070] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:44.748 00:12:44.748 real 0m12.244s 00:12:44.748 user 0m15.352s 00:12:44.748 sys 0m1.543s 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.748 ************************************ 00:12:44.748 END TEST raid_rebuild_test_io 00:12:44.748 ************************************ 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.748 21:20:02 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:44.748 21:20:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:44.748 21:20:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.748 21:20:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.748 ************************************ 00:12:44.748 START TEST raid_rebuild_test_sb_io 00:12:44.748 ************************************ 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:44.748 21:20:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76636 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76636 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76636 ']' 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.748 21:20:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.748 [2024-11-26 21:20:02.782730] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:12:44.748 [2024-11-26 21:20:02.782955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76636 ] 00:12:44.748 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:44.748 Zero copy mechanism will not be used. 00:12:45.007 [2024-11-26 21:20:02.953381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.007 [2024-11-26 21:20:03.061500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.267 [2024-11-26 21:20:03.252605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.267 [2024-11-26 21:20:03.252745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 BaseBdev1_malloc 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.527 21:20:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 [2024-11-26 21:20:03.659737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:45.527 [2024-11-26 21:20:03.659805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.527 [2024-11-26 21:20:03.659828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:45.527 [2024-11-26 21:20:03.659839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.527 [2024-11-26 21:20:03.662094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.527 [2024-11-26 21:20:03.662139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:45.527 BaseBdev1 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.527 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.787 BaseBdev2_malloc 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.787 [2024-11-26 21:20:03.712190] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:12:45.787 [2024-11-26 21:20:03.712252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.787 [2024-11-26 21:20:03.712275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:45.787 [2024-11-26 21:20:03.712285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.787 [2024-11-26 21:20:03.714338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.787 [2024-11-26 21:20:03.714378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:45.787 BaseBdev2 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.787 spare_malloc 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.787 spare_delay 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.787 [2024-11-26 21:20:03.793958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:45.787 [2024-11-26 21:20:03.794056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.787 [2024-11-26 21:20:03.794082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:45.787 [2024-11-26 21:20:03.794093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.787 [2024-11-26 21:20:03.796326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.787 [2024-11-26 21:20:03.796370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:45.787 spare 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.787 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.787 [2024-11-26 21:20:03.805934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.787 [2024-11-26 21:20:03.807656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.787 [2024-11-26 21:20:03.807839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:45.787 [2024-11-26 21:20:03.807855] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:45.787 [2024-11-26 21:20:03.808122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:45.787 
[2024-11-26 21:20:03.808301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:45.787 [2024-11-26 21:20:03.808315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:45.788 [2024-11-26 21:20:03.808466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.788 
21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.788 "name": "raid_bdev1", 00:12:45.788 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:45.788 "strip_size_kb": 0, 00:12:45.788 "state": "online", 00:12:45.788 "raid_level": "raid1", 00:12:45.788 "superblock": true, 00:12:45.788 "num_base_bdevs": 2, 00:12:45.788 "num_base_bdevs_discovered": 2, 00:12:45.788 "num_base_bdevs_operational": 2, 00:12:45.788 "base_bdevs_list": [ 00:12:45.788 { 00:12:45.788 "name": "BaseBdev1", 00:12:45.788 "uuid": "89739731-03e1-57d8-a01e-cd719b2ad2e1", 00:12:45.788 "is_configured": true, 00:12:45.788 "data_offset": 2048, 00:12:45.788 "data_size": 63488 00:12:45.788 }, 00:12:45.788 { 00:12:45.788 "name": "BaseBdev2", 00:12:45.788 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:45.788 "is_configured": true, 00:12:45.788 "data_offset": 2048, 00:12:45.788 "data_size": 63488 00:12:45.788 } 00:12:45.788 ] 00:12:45.788 }' 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.788 21:20:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:46.354 [2024-11-26 21:20:04.293405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.354 [2024-11-26 21:20:04.384943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.354 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.355 "name": "raid_bdev1", 00:12:46.355 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:46.355 "strip_size_kb": 0, 00:12:46.355 "state": "online", 00:12:46.355 "raid_level": "raid1", 00:12:46.355 "superblock": true, 00:12:46.355 "num_base_bdevs": 2, 00:12:46.355 "num_base_bdevs_discovered": 1, 00:12:46.355 "num_base_bdevs_operational": 1, 00:12:46.355 "base_bdevs_list": [ 00:12:46.355 { 00:12:46.355 "name": null, 00:12:46.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.355 "is_configured": false, 00:12:46.355 
"data_offset": 0, 00:12:46.355 "data_size": 63488 00:12:46.355 }, 00:12:46.355 { 00:12:46.355 "name": "BaseBdev2", 00:12:46.355 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:46.355 "is_configured": true, 00:12:46.355 "data_offset": 2048, 00:12:46.355 "data_size": 63488 00:12:46.355 } 00:12:46.355 ] 00:12:46.355 }' 00:12:46.355 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.355 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.355 [2024-11-26 21:20:04.481030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:46.355 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:46.355 Zero copy mechanism will not be used. 00:12:46.355 Running I/O for 60 seconds... 00:12:46.922 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.922 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.922 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.922 [2024-11-26 21:20:04.806688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.922 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.922 21:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:46.922 [2024-11-26 21:20:04.851371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:46.922 [2024-11-26 21:20:04.853196] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.922 [2024-11-26 21:20:04.964369] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:46.922 [2024-11-26 21:20:04.964811] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:47.185 [2024-11-26 21:20:05.185459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:47.185 [2024-11-26 21:20:05.185800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:47.455 170.00 IOPS, 510.00 MiB/s [2024-11-26T21:20:05.611Z] [2024-11-26 21:20:05.517344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:47.713 [2024-11-26 21:20:05.730893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:47.713 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.713 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.713 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.714 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.714 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.714 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.714 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.714 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.714 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.974 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.974 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:47.974 "name": "raid_bdev1", 00:12:47.974 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:47.974 "strip_size_kb": 0, 00:12:47.974 "state": "online", 00:12:47.974 "raid_level": "raid1", 00:12:47.974 "superblock": true, 00:12:47.974 "num_base_bdevs": 2, 00:12:47.974 "num_base_bdevs_discovered": 2, 00:12:47.974 "num_base_bdevs_operational": 2, 00:12:47.974 "process": { 00:12:47.974 "type": "rebuild", 00:12:47.974 "target": "spare", 00:12:47.974 "progress": { 00:12:47.974 "blocks": 10240, 00:12:47.974 "percent": 16 00:12:47.974 } 00:12:47.974 }, 00:12:47.974 "base_bdevs_list": [ 00:12:47.974 { 00:12:47.974 "name": "spare", 00:12:47.974 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:47.974 "is_configured": true, 00:12:47.974 "data_offset": 2048, 00:12:47.974 "data_size": 63488 00:12:47.974 }, 00:12:47.974 { 00:12:47.974 "name": "BaseBdev2", 00:12:47.974 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:47.974 "is_configured": true, 00:12:47.974 "data_offset": 2048, 00:12:47.974 "data_size": 63488 00:12:47.974 } 00:12:47.974 ] 00:12:47.974 }' 00:12:47.974 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.974 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.974 21:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.974 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.974 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:47.974 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.974 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.974 [2024-11-26 21:20:06.007182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:12:47.974 [2024-11-26 21:20:06.112541] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:47.974 [2024-11-26 21:20:06.126249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.974 [2024-11-26 21:20:06.126290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.974 [2024-11-26 21:20:06.126303] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:48.233 [2024-11-26 21:20:06.162641] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.233 "name": "raid_bdev1", 00:12:48.233 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:48.233 "strip_size_kb": 0, 00:12:48.233 "state": "online", 00:12:48.233 "raid_level": "raid1", 00:12:48.233 "superblock": true, 00:12:48.233 "num_base_bdevs": 2, 00:12:48.233 "num_base_bdevs_discovered": 1, 00:12:48.233 "num_base_bdevs_operational": 1, 00:12:48.233 "base_bdevs_list": [ 00:12:48.233 { 00:12:48.233 "name": null, 00:12:48.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.233 "is_configured": false, 00:12:48.233 "data_offset": 0, 00:12:48.233 "data_size": 63488 00:12:48.233 }, 00:12:48.233 { 00:12:48.233 "name": "BaseBdev2", 00:12:48.233 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:48.233 "is_configured": true, 00:12:48.233 "data_offset": 2048, 00:12:48.233 "data_size": 63488 00:12:48.233 } 00:12:48.233 ] 00:12:48.233 }' 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.233 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.752 161.50 IOPS, 484.50 MiB/s [2024-11-26T21:20:06.908Z] 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.752 21:20:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.752 "name": "raid_bdev1", 00:12:48.752 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:48.752 "strip_size_kb": 0, 00:12:48.752 "state": "online", 00:12:48.752 "raid_level": "raid1", 00:12:48.752 "superblock": true, 00:12:48.752 "num_base_bdevs": 2, 00:12:48.752 "num_base_bdevs_discovered": 1, 00:12:48.752 "num_base_bdevs_operational": 1, 00:12:48.752 "base_bdevs_list": [ 00:12:48.752 { 00:12:48.752 "name": null, 00:12:48.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.752 "is_configured": false, 00:12:48.752 "data_offset": 0, 00:12:48.752 "data_size": 63488 00:12:48.752 }, 00:12:48.752 { 00:12:48.752 "name": "BaseBdev2", 00:12:48.752 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:48.752 "is_configured": true, 00:12:48.752 "data_offset": 2048, 00:12:48.752 "data_size": 63488 00:12:48.752 } 00:12:48.752 ] 00:12:48.752 }' 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.752 21:20:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.752 [2024-11-26 21:20:06.808823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.752 21:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:48.752 [2024-11-26 21:20:06.868658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:48.752 [2024-11-26 21:20:06.870480] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:49.011 [2024-11-26 21:20:06.983011] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:49.011 [2024-11-26 21:20:06.983556] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:49.011 [2024-11-26 21:20:07.103188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:49.011 [2024-11-26 21:20:07.103624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:49.579 [2024-11-26 21:20:07.428143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
8192 offset_begin: 6144 offset_end: 12288 00:12:49.579 [2024-11-26 21:20:07.428771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:49.579 160.00 IOPS, 480.00 MiB/s [2024-11-26T21:20:07.735Z] [2024-11-26 21:20:07.643612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:49.579 [2024-11-26 21:20:07.644004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.839 "name": "raid_bdev1", 00:12:49.839 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:49.839 "strip_size_kb": 0, 00:12:49.839 "state": "online", 
00:12:49.839 "raid_level": "raid1", 00:12:49.839 "superblock": true, 00:12:49.839 "num_base_bdevs": 2, 00:12:49.839 "num_base_bdevs_discovered": 2, 00:12:49.839 "num_base_bdevs_operational": 2, 00:12:49.839 "process": { 00:12:49.839 "type": "rebuild", 00:12:49.839 "target": "spare", 00:12:49.839 "progress": { 00:12:49.839 "blocks": 12288, 00:12:49.839 "percent": 19 00:12:49.839 } 00:12:49.839 }, 00:12:49.839 "base_bdevs_list": [ 00:12:49.839 { 00:12:49.839 "name": "spare", 00:12:49.839 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:49.839 "is_configured": true, 00:12:49.839 "data_offset": 2048, 00:12:49.839 "data_size": 63488 00:12:49.839 }, 00:12:49.839 { 00:12:49.839 "name": "BaseBdev2", 00:12:49.839 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:49.839 "is_configured": true, 00:12:49.839 "data_offset": 2048, 00:12:49.839 "data_size": 63488 00:12:49.839 } 00:12:49.839 ] 00:12:49.839 }' 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.839 [2024-11-26 21:20:07.966091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:49.839 [2024-11-26 21:20:07.966662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:49.839 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: 
=: unary operator expected 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=409 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.839 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.099 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.099 21:20:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.099 21:20:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.099 21:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.099 "name": "raid_bdev1", 00:12:50.099 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:50.099 "strip_size_kb": 0, 00:12:50.099 "state": "online", 00:12:50.099 
"raid_level": "raid1", 00:12:50.099 "superblock": true, 00:12:50.099 "num_base_bdevs": 2, 00:12:50.099 "num_base_bdevs_discovered": 2, 00:12:50.099 "num_base_bdevs_operational": 2, 00:12:50.099 "process": { 00:12:50.099 "type": "rebuild", 00:12:50.099 "target": "spare", 00:12:50.099 "progress": { 00:12:50.099 "blocks": 14336, 00:12:50.099 "percent": 22 00:12:50.099 } 00:12:50.099 }, 00:12:50.099 "base_bdevs_list": [ 00:12:50.099 { 00:12:50.099 "name": "spare", 00:12:50.099 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:50.099 "is_configured": true, 00:12:50.099 "data_offset": 2048, 00:12:50.099 "data_size": 63488 00:12:50.099 }, 00:12:50.099 { 00:12:50.099 "name": "BaseBdev2", 00:12:50.099 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:50.099 "is_configured": true, 00:12:50.099 "data_offset": 2048, 00:12:50.099 "data_size": 63488 00:12:50.099 } 00:12:50.099 ] 00:12:50.099 }' 00:12:50.099 21:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.099 21:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.099 21:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.099 21:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.100 21:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:50.359 [2024-11-26 21:20:08.305769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:50.618 141.75 IOPS, 425.25 MiB/s [2024-11-26T21:20:08.774Z] [2024-11-26 21:20:08.527161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:50.618 [2024-11-26 21:20:08.527586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 
offset_end: 24576 00:12:50.878 [2024-11-26 21:20:08.955024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.137 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.137 "name": "raid_bdev1", 00:12:51.137 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:51.137 "strip_size_kb": 0, 00:12:51.137 "state": "online", 00:12:51.137 "raid_level": "raid1", 00:12:51.137 "superblock": true, 00:12:51.137 "num_base_bdevs": 2, 00:12:51.138 "num_base_bdevs_discovered": 2, 00:12:51.138 "num_base_bdevs_operational": 2, 00:12:51.138 "process": { 00:12:51.138 "type": "rebuild", 00:12:51.138 "target": "spare", 00:12:51.138 
"progress": { 00:12:51.138 "blocks": 28672, 00:12:51.138 "percent": 45 00:12:51.138 } 00:12:51.138 }, 00:12:51.138 "base_bdevs_list": [ 00:12:51.138 { 00:12:51.138 "name": "spare", 00:12:51.138 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:51.138 "is_configured": true, 00:12:51.138 "data_offset": 2048, 00:12:51.138 "data_size": 63488 00:12:51.138 }, 00:12:51.138 { 00:12:51.138 "name": "BaseBdev2", 00:12:51.138 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:51.138 "is_configured": true, 00:12:51.138 "data_offset": 2048, 00:12:51.138 "data_size": 63488 00:12:51.138 } 00:12:51.138 ] 00:12:51.138 }' 00:12:51.138 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.138 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.138 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.138 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.138 21:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:51.397 [2024-11-26 21:20:09.385820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:51.656 126.20 IOPS, 378.60 MiB/s [2024-11-26T21:20:09.812Z] [2024-11-26 21:20:09.596213] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.225 "name": "raid_bdev1", 00:12:52.225 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:52.225 "strip_size_kb": 0, 00:12:52.225 "state": "online", 00:12:52.225 "raid_level": "raid1", 00:12:52.225 "superblock": true, 00:12:52.225 "num_base_bdevs": 2, 00:12:52.225 "num_base_bdevs_discovered": 2, 00:12:52.225 "num_base_bdevs_operational": 2, 00:12:52.225 "process": { 00:12:52.225 "type": "rebuild", 00:12:52.225 "target": "spare", 00:12:52.225 "progress": { 00:12:52.225 "blocks": 47104, 00:12:52.225 "percent": 74 00:12:52.225 } 00:12:52.225 }, 00:12:52.225 "base_bdevs_list": [ 00:12:52.225 { 00:12:52.225 "name": "spare", 00:12:52.225 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:52.225 "is_configured": true, 00:12:52.225 "data_offset": 2048, 00:12:52.225 "data_size": 63488 00:12:52.225 }, 00:12:52.225 { 00:12:52.225 "name": "BaseBdev2", 00:12:52.225 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:52.225 "is_configured": true, 00:12:52.225 "data_offset": 2048, 00:12:52.225 "data_size": 63488 00:12:52.225 } 00:12:52.225 ] 
00:12:52.225 }' 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.225 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.485 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.485 21:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:53.052 111.50 IOPS, 334.50 MiB/s [2024-11-26T21:20:11.208Z] [2024-11-26 21:20:11.021058] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:53.052 [2024-11-26 21:20:11.120836] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:53.052 [2024-11-26 21:20:11.124422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.312 21:20:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.312 "name": "raid_bdev1", 00:12:53.312 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:53.312 "strip_size_kb": 0, 00:12:53.312 "state": "online", 00:12:53.312 "raid_level": "raid1", 00:12:53.312 "superblock": true, 00:12:53.312 "num_base_bdevs": 2, 00:12:53.312 "num_base_bdevs_discovered": 2, 00:12:53.312 "num_base_bdevs_operational": 2, 00:12:53.312 "base_bdevs_list": [ 00:12:53.312 { 00:12:53.312 "name": "spare", 00:12:53.312 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:53.312 "is_configured": true, 00:12:53.312 "data_offset": 2048, 00:12:53.312 "data_size": 63488 00:12:53.312 }, 00:12:53.312 { 00:12:53.312 "name": "BaseBdev2", 00:12:53.312 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:53.312 "is_configured": true, 00:12:53.312 "data_offset": 2048, 00:12:53.312 "data_size": 63488 00:12:53.312 } 00:12:53.312 ] 00:12:53.312 }' 00:12:53.312 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.572 100.43 IOPS, 301.29 MiB/s [2024-11-26T21:20:11.728Z] 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.572 "name": "raid_bdev1", 00:12:53.572 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:53.572 "strip_size_kb": 0, 00:12:53.572 "state": "online", 00:12:53.572 "raid_level": "raid1", 00:12:53.572 "superblock": true, 00:12:53.572 "num_base_bdevs": 2, 00:12:53.572 "num_base_bdevs_discovered": 2, 00:12:53.572 "num_base_bdevs_operational": 2, 00:12:53.572 "base_bdevs_list": [ 00:12:53.572 { 00:12:53.572 "name": "spare", 00:12:53.572 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:53.572 "is_configured": true, 00:12:53.572 "data_offset": 2048, 00:12:53.572 "data_size": 63488 00:12:53.572 }, 00:12:53.572 { 00:12:53.572 "name": "BaseBdev2", 00:12:53.572 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:53.572 "is_configured": true, 00:12:53.572 "data_offset": 2048, 00:12:53.572 "data_size": 63488 00:12:53.572 } 00:12:53.572 ] 00:12:53.572 }' 
00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.572 21:20:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.572 "name": "raid_bdev1", 00:12:53.572 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:53.572 "strip_size_kb": 0, 00:12:53.572 "state": "online", 00:12:53.572 "raid_level": "raid1", 00:12:53.572 "superblock": true, 00:12:53.572 "num_base_bdevs": 2, 00:12:53.572 "num_base_bdevs_discovered": 2, 00:12:53.572 "num_base_bdevs_operational": 2, 00:12:53.572 "base_bdevs_list": [ 00:12:53.572 { 00:12:53.572 "name": "spare", 00:12:53.572 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:53.572 "is_configured": true, 00:12:53.572 "data_offset": 2048, 00:12:53.572 "data_size": 63488 00:12:53.572 }, 00:12:53.572 { 00:12:53.572 "name": "BaseBdev2", 00:12:53.572 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:53.572 "is_configured": true, 00:12:53.572 "data_offset": 2048, 00:12:53.572 "data_size": 63488 00:12:53.572 } 00:12:53.572 ] 00:12:53.572 }' 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.572 21:20:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.143 [2024-11-26 21:20:12.104501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:54.143 [2024-11-26 21:20:12.104650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.143 00:12:54.143 Latency(us) 00:12:54.143 
[2024-11-26T21:20:12.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.143 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:54.143 raid_bdev1 : 7.74 94.23 282.70 0.00 0.00 14580.89 289.76 118136.51 00:12:54.143 [2024-11-26T21:20:12.299Z] =================================================================================================================== 00:12:54.143 [2024-11-26T21:20:12.299Z] Total : 94.23 282.70 0.00 0.00 14580.89 289.76 118136.51 00:12:54.143 [2024-11-26 21:20:12.225204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.143 [2024-11-26 21:20:12.225318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.143 [2024-11-26 21:20:12.225430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.143 [2024-11-26 21:20:12.225477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:54.143 { 00:12:54.143 "results": [ 00:12:54.143 { 00:12:54.143 "job": "raid_bdev1", 00:12:54.143 "core_mask": "0x1", 00:12:54.143 "workload": "randrw", 00:12:54.143 "percentage": 50, 00:12:54.143 "status": "finished", 00:12:54.143 "queue_depth": 2, 00:12:54.143 "io_size": 3145728, 00:12:54.143 "runtime": 7.736158, 00:12:54.143 "iops": 94.23282202871245, 00:12:54.143 "mibps": 282.69846608613733, 00:12:54.143 "io_failed": 0, 00:12:54.143 "io_timeout": 0, 00:12:54.143 "avg_latency_us": 14580.893647456285, 00:12:54.143 "min_latency_us": 289.7606986899563, 00:12:54.143 "max_latency_us": 118136.51004366812 00:12:54.143 } 00:12:54.143 ], 00:12:54.143 "core_count": 1 00:12:54.143 } 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:54.143 21:20:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.143 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:54.404 /dev/nbd0 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.404 1+0 records in 00:12:54.404 1+0 records out 00:12:54.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545103 s, 7.5 MB/s 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.404 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:54.679 /dev/nbd1 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.679 1+0 records in 00:12:54.679 1+0 records out 00:12:54.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278895 s, 14.7 MB/s 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.679 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:54.939 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:54.939 
21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.939 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:54.939 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:54.939 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:54.939 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.939 21:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:55.197 
21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.197 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.456 
21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.456 [2024-11-26 21:20:13.401576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:55.456 [2024-11-26 21:20:13.401635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.456 [2024-11-26 21:20:13.401659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:55.456 [2024-11-26 21:20:13.401668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.456 [2024-11-26 21:20:13.403884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.456 [2024-11-26 21:20:13.403924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:55.456 [2024-11-26 21:20:13.404023] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:55.456 [2024-11-26 21:20:13.404084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.456 [2024-11-26 21:20:13.404234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.456 spare 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.456 [2024-11-26 21:20:13.504128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:55.456 [2024-11-26 21:20:13.504157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:55.456 [2024-11-26 21:20:13.504417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d00002b0d0 00:12:55.456 [2024-11-26 21:20:13.504583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:55.456 [2024-11-26 21:20:13.504593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:55.456 [2024-11-26 21:20:13.504751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.456 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.457 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.457 "name": "raid_bdev1", 00:12:55.457 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:55.457 "strip_size_kb": 0, 00:12:55.457 "state": "online", 00:12:55.457 "raid_level": "raid1", 00:12:55.457 "superblock": true, 00:12:55.457 "num_base_bdevs": 2, 00:12:55.457 "num_base_bdevs_discovered": 2, 00:12:55.457 "num_base_bdevs_operational": 2, 00:12:55.457 "base_bdevs_list": [ 00:12:55.457 { 00:12:55.457 "name": "spare", 00:12:55.457 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:55.457 "is_configured": true, 00:12:55.457 "data_offset": 2048, 00:12:55.457 "data_size": 63488 00:12:55.457 }, 00:12:55.457 { 00:12:55.457 "name": "BaseBdev2", 00:12:55.457 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:55.457 "is_configured": true, 00:12:55.457 "data_offset": 2048, 00:12:55.457 "data_size": 63488 00:12:55.457 } 00:12:55.457 ] 00:12:55.457 }' 00:12:55.457 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.457 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.023 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.023 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.023 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.023 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.023 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.023 
21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.023 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.023 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.023 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.023 21:20:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.023 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.023 "name": "raid_bdev1", 00:12:56.023 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:56.023 "strip_size_kb": 0, 00:12:56.023 "state": "online", 00:12:56.023 "raid_level": "raid1", 00:12:56.023 "superblock": true, 00:12:56.023 "num_base_bdevs": 2, 00:12:56.023 "num_base_bdevs_discovered": 2, 00:12:56.023 "num_base_bdevs_operational": 2, 00:12:56.023 "base_bdevs_list": [ 00:12:56.023 { 00:12:56.023 "name": "spare", 00:12:56.023 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:56.023 "is_configured": true, 00:12:56.023 "data_offset": 2048, 00:12:56.023 "data_size": 63488 00:12:56.023 }, 00:12:56.023 { 00:12:56.023 "name": "BaseBdev2", 00:12:56.023 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:56.023 "is_configured": true, 00:12:56.023 "data_offset": 2048, 00:12:56.023 "data_size": 63488 00:12:56.023 } 00:12:56.023 ] 00:12:56.023 }' 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.024 
21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.024 [2024-11-26 21:20:14.136457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.024 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.283 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.283 "name": "raid_bdev1", 00:12:56.283 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:56.283 "strip_size_kb": 0, 00:12:56.283 "state": "online", 00:12:56.283 "raid_level": "raid1", 00:12:56.283 "superblock": true, 00:12:56.283 "num_base_bdevs": 2, 00:12:56.283 "num_base_bdevs_discovered": 1, 00:12:56.283 "num_base_bdevs_operational": 1, 00:12:56.283 "base_bdevs_list": [ 00:12:56.283 { 00:12:56.283 "name": null, 00:12:56.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.283 "is_configured": false, 00:12:56.283 "data_offset": 0, 00:12:56.283 "data_size": 63488 00:12:56.283 }, 00:12:56.283 { 00:12:56.283 "name": "BaseBdev2", 00:12:56.283 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:56.283 "is_configured": true, 00:12:56.283 "data_offset": 2048, 00:12:56.283 "data_size": 63488 00:12:56.283 } 00:12:56.283 ] 00:12:56.283 }' 00:12:56.283 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.283 21:20:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.543 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:56.543 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.543 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.543 [2024-11-26 21:20:14.595753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:56.543 [2024-11-26 21:20:14.596041] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:56.543 [2024-11-26 21:20:14.596112] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:56.543 [2024-11-26 21:20:14.596176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:56.543 [2024-11-26 21:20:14.612488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:12:56.543 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.543 21:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:56.543 [2024-11-26 21:20:14.614323] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:57.481 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.481 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.481 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.481 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.481 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.481 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.481 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.481 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.481 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.740 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.741 "name": "raid_bdev1", 00:12:57.741 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:57.741 "strip_size_kb": 0, 00:12:57.741 "state": "online", 00:12:57.741 "raid_level": "raid1", 00:12:57.741 "superblock": true, 00:12:57.741 "num_base_bdevs": 2, 00:12:57.741 "num_base_bdevs_discovered": 2, 00:12:57.741 "num_base_bdevs_operational": 2, 00:12:57.741 "process": { 00:12:57.741 "type": "rebuild", 00:12:57.741 "target": "spare", 00:12:57.741 "progress": { 00:12:57.741 "blocks": 20480, 00:12:57.741 "percent": 32 00:12:57.741 } 00:12:57.741 }, 00:12:57.741 "base_bdevs_list": [ 00:12:57.741 { 00:12:57.741 "name": "spare", 00:12:57.741 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:57.741 "is_configured": true, 00:12:57.741 "data_offset": 2048, 00:12:57.741 "data_size": 63488 00:12:57.741 }, 00:12:57.741 { 00:12:57.741 "name": "BaseBdev2", 00:12:57.741 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:57.741 "is_configured": true, 00:12:57.741 "data_offset": 2048, 00:12:57.741 "data_size": 63488 00:12:57.741 } 00:12:57.741 ] 00:12:57.741 }' 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.741 [2024-11-26 21:20:15.750101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:57.741 [2024-11-26 21:20:15.819511] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:57.741 [2024-11-26 21:20:15.819572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.741 [2024-11-26 21:20:15.819587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:57.741 [2024-11-26 21:20:15.819596] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.741 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.001 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.001 "name": "raid_bdev1", 00:12:58.001 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:58.001 "strip_size_kb": 0, 00:12:58.001 "state": "online", 00:12:58.001 "raid_level": "raid1", 00:12:58.001 "superblock": true, 00:12:58.001 "num_base_bdevs": 2, 00:12:58.001 "num_base_bdevs_discovered": 1, 00:12:58.001 "num_base_bdevs_operational": 1, 00:12:58.001 "base_bdevs_list": [ 00:12:58.001 { 00:12:58.001 "name": null, 00:12:58.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.001 "is_configured": false, 00:12:58.001 "data_offset": 0, 00:12:58.001 "data_size": 63488 00:12:58.001 }, 00:12:58.001 { 00:12:58.001 "name": "BaseBdev2", 00:12:58.001 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:58.001 "is_configured": true, 00:12:58.001 "data_offset": 2048, 00:12:58.001 "data_size": 63488 00:12:58.001 } 00:12:58.001 ] 00:12:58.001 }' 
00:12:58.001 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.001 21:20:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.259 21:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:58.259 21:20:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.259 21:20:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.259 [2024-11-26 21:20:16.314991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:58.259 [2024-11-26 21:20:16.315063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.259 [2024-11-26 21:20:16.315085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:58.259 [2024-11-26 21:20:16.315097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.259 [2024-11-26 21:20:16.315587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.259 [2024-11-26 21:20:16.315617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:58.259 [2024-11-26 21:20:16.315716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:58.259 [2024-11-26 21:20:16.315732] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:58.259 [2024-11-26 21:20:16.315742] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:58.259 [2024-11-26 21:20:16.315764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:58.259 [2024-11-26 21:20:16.332162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:12:58.259 spare 00:12:58.259 21:20:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.259 21:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:58.260 [2024-11-26 21:20:16.334023] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:59.197 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.197 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.197 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.197 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.197 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.197 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.197 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.197 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.197 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.456 "name": "raid_bdev1", 00:12:59.456 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:59.456 "strip_size_kb": 0, 00:12:59.456 
"state": "online", 00:12:59.456 "raid_level": "raid1", 00:12:59.456 "superblock": true, 00:12:59.456 "num_base_bdevs": 2, 00:12:59.456 "num_base_bdevs_discovered": 2, 00:12:59.456 "num_base_bdevs_operational": 2, 00:12:59.456 "process": { 00:12:59.456 "type": "rebuild", 00:12:59.456 "target": "spare", 00:12:59.456 "progress": { 00:12:59.456 "blocks": 20480, 00:12:59.456 "percent": 32 00:12:59.456 } 00:12:59.456 }, 00:12:59.456 "base_bdevs_list": [ 00:12:59.456 { 00:12:59.456 "name": "spare", 00:12:59.456 "uuid": "5996c17f-4bba-5a43-b914-bca3f515bef7", 00:12:59.456 "is_configured": true, 00:12:59.456 "data_offset": 2048, 00:12:59.456 "data_size": 63488 00:12:59.456 }, 00:12:59.456 { 00:12:59.456 "name": "BaseBdev2", 00:12:59.456 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:59.456 "is_configured": true, 00:12:59.456 "data_offset": 2048, 00:12:59.456 "data_size": 63488 00:12:59.456 } 00:12:59.456 ] 00:12:59.456 }' 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.456 [2024-11-26 21:20:17.481644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.456 [2024-11-26 21:20:17.539043] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:12:59.456 [2024-11-26 21:20:17.539104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.456 [2024-11-26 21:20:17.539120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.456 [2024-11-26 21:20:17.539128] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.456 21:20:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.456 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.715 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.715 "name": "raid_bdev1", 00:12:59.715 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:59.715 "strip_size_kb": 0, 00:12:59.715 "state": "online", 00:12:59.715 "raid_level": "raid1", 00:12:59.715 "superblock": true, 00:12:59.715 "num_base_bdevs": 2, 00:12:59.715 "num_base_bdevs_discovered": 1, 00:12:59.715 "num_base_bdevs_operational": 1, 00:12:59.715 "base_bdevs_list": [ 00:12:59.715 { 00:12:59.715 "name": null, 00:12:59.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.715 "is_configured": false, 00:12:59.715 "data_offset": 0, 00:12:59.715 "data_size": 63488 00:12:59.715 }, 00:12:59.715 { 00:12:59.715 "name": "BaseBdev2", 00:12:59.715 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:59.715 "is_configured": true, 00:12:59.715 "data_offset": 2048, 00:12:59.715 "data_size": 63488 00:12:59.715 } 00:12:59.715 ] 00:12:59.715 }' 00:12:59.715 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.715 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.975 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.976 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.976 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.976 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.976 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.976 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.976 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.976 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.976 21:20:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.976 "name": "raid_bdev1", 00:12:59.976 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:12:59.976 "strip_size_kb": 0, 00:12:59.976 "state": "online", 00:12:59.976 "raid_level": "raid1", 00:12:59.976 "superblock": true, 00:12:59.976 "num_base_bdevs": 2, 00:12:59.976 "num_base_bdevs_discovered": 1, 00:12:59.976 "num_base_bdevs_operational": 1, 00:12:59.976 "base_bdevs_list": [ 00:12:59.976 { 00:12:59.976 "name": null, 00:12:59.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.976 "is_configured": false, 00:12:59.976 "data_offset": 0, 00:12:59.976 "data_size": 63488 00:12:59.976 }, 00:12:59.976 { 00:12:59.976 "name": "BaseBdev2", 00:12:59.976 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:12:59.976 "is_configured": true, 00:12:59.976 "data_offset": 2048, 00:12:59.976 "data_size": 63488 00:12:59.976 } 00:12:59.976 ] 00:12:59.976 }' 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.976 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.235 [2024-11-26 21:20:18.133310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:00.235 [2024-11-26 21:20:18.133368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.235 [2024-11-26 21:20:18.133400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:00.235 [2024-11-26 21:20:18.133412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.235 [2024-11-26 21:20:18.133868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.235 [2024-11-26 21:20:18.133894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:00.235 [2024-11-26 21:20:18.133992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:00.235 [2024-11-26 21:20:18.134012] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:00.235 [2024-11-26 21:20:18.134025] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:00.235 [2024-11-26 21:20:18.134035] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:00.235 BaseBdev1 00:13:00.235 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.235 21:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:01.182 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.183 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.183 "name": "raid_bdev1", 00:13:01.183 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:13:01.183 "strip_size_kb": 0, 00:13:01.183 "state": "online", 00:13:01.183 "raid_level": "raid1", 00:13:01.183 "superblock": true, 00:13:01.184 "num_base_bdevs": 2, 00:13:01.184 "num_base_bdevs_discovered": 1, 00:13:01.184 "num_base_bdevs_operational": 1, 00:13:01.184 "base_bdevs_list": [ 00:13:01.184 { 00:13:01.184 "name": null, 00:13:01.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.184 "is_configured": false, 00:13:01.184 "data_offset": 0, 00:13:01.184 "data_size": 63488 00:13:01.184 }, 00:13:01.184 { 00:13:01.184 "name": "BaseBdev2", 00:13:01.184 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:13:01.184 "is_configured": true, 00:13:01.184 "data_offset": 2048, 00:13:01.184 "data_size": 63488 00:13:01.184 } 00:13:01.184 ] 00:13:01.184 }' 00:13:01.184 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.184 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.754 "name": "raid_bdev1", 00:13:01.754 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:13:01.754 "strip_size_kb": 0, 00:13:01.754 "state": "online", 00:13:01.754 "raid_level": "raid1", 00:13:01.754 "superblock": true, 00:13:01.754 "num_base_bdevs": 2, 00:13:01.754 "num_base_bdevs_discovered": 1, 00:13:01.754 "num_base_bdevs_operational": 1, 00:13:01.754 "base_bdevs_list": [ 00:13:01.754 { 00:13:01.754 "name": null, 00:13:01.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.754 "is_configured": false, 00:13:01.754 "data_offset": 0, 00:13:01.754 "data_size": 63488 00:13:01.754 }, 00:13:01.754 { 00:13:01.754 "name": "BaseBdev2", 00:13:01.754 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:13:01.754 "is_configured": true, 00:13:01.754 "data_offset": 2048, 00:13:01.754 "data_size": 63488 00:13:01.754 } 00:13:01.754 ] 00:13:01.754 }' 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.754 [2024-11-26 21:20:19.755008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.754 [2024-11-26 21:20:19.755264] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:01.754 [2024-11-26 21:20:19.755284] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:01.754 request: 00:13:01.754 { 00:13:01.754 "base_bdev": "BaseBdev1", 00:13:01.754 "raid_bdev": "raid_bdev1", 00:13:01.754 "method": "bdev_raid_add_base_bdev", 00:13:01.754 "req_id": 1 00:13:01.754 } 00:13:01.754 Got JSON-RPC error response 00:13:01.754 response: 00:13:01.754 { 00:13:01.754 "code": -22, 00:13:01.754 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:01.754 } 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.754 21:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.688 "name": "raid_bdev1", 00:13:02.688 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:13:02.688 "strip_size_kb": 0, 00:13:02.688 "state": "online", 00:13:02.688 "raid_level": "raid1", 00:13:02.688 "superblock": true, 00:13:02.688 "num_base_bdevs": 2, 00:13:02.688 "num_base_bdevs_discovered": 1, 00:13:02.688 "num_base_bdevs_operational": 1, 00:13:02.688 "base_bdevs_list": [ 00:13:02.688 { 00:13:02.688 "name": null, 00:13:02.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.688 "is_configured": false, 00:13:02.688 "data_offset": 0, 00:13:02.688 "data_size": 63488 00:13:02.688 }, 00:13:02.688 { 00:13:02.688 "name": "BaseBdev2", 00:13:02.688 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:13:02.688 "is_configured": true, 00:13:02.688 "data_offset": 2048, 00:13:02.688 "data_size": 63488 00:13:02.688 } 00:13:02.688 ] 00:13:02.688 }' 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.688 21:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.255 21:20:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.255 "name": "raid_bdev1", 00:13:03.255 "uuid": "b2110763-e7ba-408d-846c-cd67fc01a943", 00:13:03.255 "strip_size_kb": 0, 00:13:03.255 "state": "online", 00:13:03.255 "raid_level": "raid1", 00:13:03.255 "superblock": true, 00:13:03.255 "num_base_bdevs": 2, 00:13:03.255 "num_base_bdevs_discovered": 1, 00:13:03.255 "num_base_bdevs_operational": 1, 00:13:03.255 "base_bdevs_list": [ 00:13:03.255 { 00:13:03.255 "name": null, 00:13:03.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.255 "is_configured": false, 00:13:03.255 "data_offset": 0, 00:13:03.255 "data_size": 63488 00:13:03.255 }, 00:13:03.255 { 00:13:03.255 "name": "BaseBdev2", 00:13:03.255 "uuid": "4df1347b-05ee-5499-bedc-420181764a2c", 00:13:03.255 "is_configured": true, 00:13:03.255 "data_offset": 2048, 00:13:03.255 "data_size": 63488 00:13:03.255 } 00:13:03.255 ] 00:13:03.255 }' 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.255 21:20:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76636 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76636 ']' 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76636 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76636 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.255 killing process with pid 76636 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76636' 00:13:03.255 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76636 00:13:03.255 Received shutdown signal, test time was about 16.878244 seconds 00:13:03.255 00:13:03.255 Latency(us) 00:13:03.255 [2024-11-26T21:20:21.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.255 [2024-11-26T21:20:21.411Z] =================================================================================================================== 00:13:03.255 [2024-11-26T21:20:21.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:03.255 [2024-11-26 21:20:21.328748] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:03.256 21:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76636 00:13:03.256 [2024-11-26 21:20:21.328930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.256 [2024-11-26 21:20:21.329024] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.256 [2024-11-26 21:20:21.329047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:03.514 [2024-11-26 21:20:21.642782] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:05.419 00:13:05.419 real 0m20.533s 00:13:05.419 user 0m26.632s 00:13:05.419 sys 0m2.136s 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.419 ************************************ 00:13:05.419 END TEST raid_rebuild_test_sb_io 00:13:05.419 ************************************ 00:13:05.419 21:20:23 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:05.419 21:20:23 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:05.419 21:20:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:05.419 21:20:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.419 21:20:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.419 ************************************ 00:13:05.419 START TEST raid_rebuild_test 00:13:05.419 ************************************ 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:05.419 21:20:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77328 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77328 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77328 ']' 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.419 21:20:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.419 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:05.419 Zero copy mechanism will not be used. 
00:13:05.419 [2024-11-26 21:20:23.400757] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:13:05.419 [2024-11-26 21:20:23.400881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77328 ] 00:13:05.419 [2024-11-26 21:20:23.564159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.676 [2024-11-26 21:20:23.724018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.934 [2024-11-26 21:20:24.011312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.934 [2024-11-26 21:20:24.011391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.193 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.193 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:06.193 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:06.193 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:06.193 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.193 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.451 BaseBdev1_malloc 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.451 
[2024-11-26 21:20:24.356157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:06.451 [2024-11-26 21:20:24.356238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.451 [2024-11-26 21:20:24.356270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:06.451 [2024-11-26 21:20:24.356286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.451 [2024-11-26 21:20:24.359135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.451 [2024-11-26 21:20:24.359181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:06.451 BaseBdev1 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.451 BaseBdev2_malloc 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.451 [2024-11-26 21:20:24.426565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:06.451 [2024-11-26 21:20:24.426640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened
00:13:06.451 [2024-11-26 21:20:24.426671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:06.451 [2024-11-26 21:20:24.426686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:06.451 [2024-11-26 21:20:24.429561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:06.451 [2024-11-26 21:20:24.429606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:13:06.451 BaseBdev2
00:13:06.451 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.452 BaseBdev3_malloc
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.452 [2024-11-26 21:20:24.508211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:13:06.452 [2024-11-26 21:20:24.508278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:06.452 [2024-11-26 21:20:24.508307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:13:06.452 [2024-11-26 21:20:24.508321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:06.452 [2024-11-26 21:20:24.511083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:06.452 [2024-11-26 21:20:24.511125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:13:06.452 BaseBdev3
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.452 BaseBdev4_malloc
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.452 [2024-11-26 21:20:24.576996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:13:06.452 [2024-11-26 21:20:24.577066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:06.452 [2024-11-26 21:20:24.577091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:13:06.452 [2024-11-26 21:20:24.577105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:06.452 [2024-11-26 21:20:24.579913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:06.452 [2024-11-26 21:20:24.579973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:13:06.452 BaseBdev4
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.452 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.713 spare_malloc
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.713 spare_delay
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.713 [2024-11-26 21:20:24.658124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:06.713 [2024-11-26 21:20:24.658187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:06.713 [2024-11-26 21:20:24.658208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:13:06.713 [2024-11-26 21:20:24.658222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:06.713 [2024-11-26 21:20:24.661016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:06.713 [2024-11-26 21:20:24.661057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:06.713 spare
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.713 [2024-11-26 21:20:24.670179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:06.713 [2024-11-26 21:20:24.672658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:06.713 [2024-11-26 21:20:24.672737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:06.713 [2024-11-26 21:20:24.672800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:06.713 [2024-11-26 21:20:24.672896] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:13:06.713 [2024-11-26 21:20:24.672912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:13:06.713 [2024-11-26 21:20:24.673240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:13:06.713 [2024-11-26 21:20:24.673472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:13:06.713 [2024-11-26 21:20:24.673495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:13:06.713 [2024-11-26 21:20:24.673701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:06.713 "name": "raid_bdev1",
00:13:06.713 "uuid": "4d343949-77fb-483b-b241-ad267269356e",
00:13:06.713 "strip_size_kb": 0,
00:13:06.713 "state": "online",
00:13:06.713 "raid_level": "raid1",
00:13:06.713 "superblock": false,
00:13:06.713 "num_base_bdevs": 4,
00:13:06.713 "num_base_bdevs_discovered": 4,
00:13:06.713 "num_base_bdevs_operational": 4,
00:13:06.713 "base_bdevs_list": [
00:13:06.713 {
00:13:06.713 "name": "BaseBdev1",
00:13:06.713 "uuid": "67548a71-a860-5ee3-871f-00cbae75ab29",
00:13:06.713 "is_configured": true,
00:13:06.713 "data_offset": 0,
00:13:06.713 "data_size": 65536
00:13:06.713 },
00:13:06.713 {
00:13:06.713 "name": "BaseBdev2",
00:13:06.713 "uuid": "f2d637bc-7fb9-5a4e-8c18-0104f762145b",
00:13:06.713 "is_configured": true,
00:13:06.713 "data_offset": 0,
00:13:06.713 "data_size": 65536
00:13:06.713 },
00:13:06.713 {
00:13:06.713 "name": "BaseBdev3",
00:13:06.713 "uuid": "d03984ff-7542-51ed-9697-359a466d270a",
00:13:06.713 "is_configured": true,
00:13:06.713 "data_offset": 0,
00:13:06.713 "data_size": 65536
00:13:06.713 },
00:13:06.713 {
00:13:06.713 "name": "BaseBdev4",
00:13:06.713 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9",
00:13:06.713 "is_configured": true,
00:13:06.713 "data_offset": 0,
00:13:06.713 "data_size": 65536
00:13:06.713 }
00:13:06.713 ]
00:13:06.713 }'
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:06.713 21:20:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.995 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:06.995 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:06.995 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.995 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.995 [2024-11-26 21:20:25.093866] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:06.995 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.995 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:13:06.995 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:06.995 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.995 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.995 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:06.995 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:13:07.254 [2024-11-26 21:20:25.341199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:13:07.254 /dev/nbd0
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:07.254 1+0 records in
00:13:07.254 1+0 records out
00:13:07.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404828 s, 10.1 MB/s
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:13:07.254 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:07.533 21:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:07.533 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:13:07.533 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:13:07.533 21:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:13:14.099 65536+0 records in
00:13:14.099 65536+0 records out
00:13:14.099 33554432 bytes (34 MB, 32 MiB) copied, 5.73645 s, 5.8 MB/s
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:14.099 [2024-11-26 21:20:31.356580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:14.099 [2024-11-26 21:20:31.372641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:14.099 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:14.100 "name": "raid_bdev1",
00:13:14.100 "uuid": "4d343949-77fb-483b-b241-ad267269356e",
00:13:14.100 "strip_size_kb": 0,
00:13:14.100 "state": "online",
00:13:14.100 "raid_level": "raid1",
00:13:14.100 "superblock": false,
00:13:14.100 "num_base_bdevs": 4,
00:13:14.100 "num_base_bdevs_discovered": 3,
00:13:14.100 "num_base_bdevs_operational": 3,
00:13:14.100 "base_bdevs_list": [
00:13:14.100 {
00:13:14.100 "name": null,
00:13:14.100 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:14.100 "is_configured": false,
00:13:14.100 "data_offset": 0,
00:13:14.100 "data_size": 65536
00:13:14.100 },
00:13:14.100 {
00:13:14.100 "name": "BaseBdev2",
00:13:14.100 "uuid": "f2d637bc-7fb9-5a4e-8c18-0104f762145b",
00:13:14.100 "is_configured": true,
00:13:14.100 "data_offset": 0,
00:13:14.100 "data_size": 65536
00:13:14.100 },
00:13:14.100 {
00:13:14.100 "name": "BaseBdev3",
00:13:14.100 "uuid": "d03984ff-7542-51ed-9697-359a466d270a",
00:13:14.100 "is_configured": true,
00:13:14.100 "data_offset": 0,
00:13:14.100 "data_size": 65536
00:13:14.100 },
00:13:14.100 {
00:13:14.100 "name": "BaseBdev4",
00:13:14.100 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9",
00:13:14.100 "is_configured": true,
00:13:14.100 "data_offset": 0,
00:13:14.100 "data_size": 65536
00:13:14.100 }
00:13:14.100 ]
00:13:14.100 }'
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:14.100 [2024-11-26 21:20:31.780091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:14.100 [2024-11-26 21:20:31.797055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:14.100 21:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:13:14.100 [2024-11-26 21:20:31.799228] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:14.692 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:14.692 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:14.692 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:14.692 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:14.692 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:14.692 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:14.692 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:14.692 21:20:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:14.692 21:20:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:14.692 21:20:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:14.951 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:14.951 "name": "raid_bdev1",
00:13:14.951 "uuid": "4d343949-77fb-483b-b241-ad267269356e",
00:13:14.951 "strip_size_kb": 0,
00:13:14.951 "state": "online",
00:13:14.951 "raid_level": "raid1",
00:13:14.951 "superblock": false,
00:13:14.951 "num_base_bdevs": 4,
00:13:14.951 "num_base_bdevs_discovered": 4,
00:13:14.951 "num_base_bdevs_operational": 4,
00:13:14.951 "process": {
00:13:14.951 "type": "rebuild",
00:13:14.951 "target": "spare",
00:13:14.951 "progress": {
00:13:14.951 "blocks": 20480,
00:13:14.951 "percent": 31
00:13:14.951 }
00:13:14.951 },
00:13:14.951 "base_bdevs_list": [
00:13:14.951 {
00:13:14.951 "name": "spare",
00:13:14.951 "uuid": "9b3c517b-5a85-56c8-8384-f8176ae27714",
00:13:14.951 "is_configured": true,
00:13:14.951 "data_offset": 0,
00:13:14.951 "data_size": 65536
00:13:14.951 },
00:13:14.951 {
00:13:14.951 "name": "BaseBdev2",
00:13:14.951 "uuid": "f2d637bc-7fb9-5a4e-8c18-0104f762145b",
00:13:14.951 "is_configured": true,
00:13:14.951 "data_offset": 0,
00:13:14.951 "data_size": 65536
00:13:14.951 },
00:13:14.951 {
00:13:14.951 "name": "BaseBdev3",
00:13:14.951 "uuid": "d03984ff-7542-51ed-9697-359a466d270a",
00:13:14.951 "is_configured": true,
00:13:14.951 "data_offset": 0,
00:13:14.951 "data_size": 65536
00:13:14.951 },
00:13:14.951 {
00:13:14.951 "name": "BaseBdev4",
00:13:14.951 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9",
00:13:14.951 "is_configured": true,
00:13:14.951 "data_offset": 0,
00:13:14.951 "data_size": 65536
00:13:14.951 }
00:13:14.951 ]
00:13:14.951 }'
00:13:14.951 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:14.951 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:14.951 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:14.951 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:14.951 21:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:14.951 21:20:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:14.951 21:20:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:14.951 [2024-11-26 21:20:32.923060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:14.951 [2024-11-26 21:20:33.008652] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:14.951 [2024-11-26 21:20:33.008722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:14.951 [2024-11-26 21:20:33.008740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:14.951 [2024-11-26 21:20:33.008751] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:14.951 21:20:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:14.952 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:14.952 "name": "raid_bdev1",
00:13:14.952 "uuid": "4d343949-77fb-483b-b241-ad267269356e",
00:13:14.952 "strip_size_kb": 0,
00:13:14.952 "state": "online",
00:13:14.952 "raid_level": "raid1",
00:13:14.952 "superblock": false,
00:13:14.952 "num_base_bdevs": 4,
00:13:14.952 "num_base_bdevs_discovered": 3,
00:13:14.952 "num_base_bdevs_operational": 3,
00:13:14.952 "base_bdevs_list": [
00:13:14.952 {
00:13:14.952 "name": null,
00:13:14.952 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:14.952 "is_configured": false,
00:13:14.952 "data_offset": 0,
00:13:14.952 "data_size": 65536
00:13:14.952 },
00:13:14.952 {
00:13:14.952 "name": "BaseBdev2",
00:13:14.952 "uuid": "f2d637bc-7fb9-5a4e-8c18-0104f762145b",
00:13:14.952 "is_configured": true,
00:13:14.952 "data_offset": 0,
00:13:14.952 "data_size": 65536
00:13:14.952 },
00:13:14.952 {
00:13:14.952 "name": "BaseBdev3",
00:13:14.952 "uuid": "d03984ff-7542-51ed-9697-359a466d270a",
00:13:14.952 "is_configured": true,
00:13:14.952 "data_offset": 0,
00:13:14.952 "data_size": 65536
00:13:14.952 },
00:13:14.952 {
00:13:14.952 "name": "BaseBdev4",
00:13:14.952 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9",
00:13:14.952 "is_configured": true,
00:13:14.952 "data_offset": 0,
00:13:14.952 "data_size": 65536
00:13:14.952 }
00:13:14.952 ]
00:13:14.952 }'
00:13:14.952 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:14.952 21:20:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:15.519 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:15.519 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:15.519 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:15.519 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:15.519 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:15.520 "name": "raid_bdev1",
00:13:15.520 "uuid": "4d343949-77fb-483b-b241-ad267269356e",
00:13:15.520 "strip_size_kb": 0,
00:13:15.520 "state": "online",
00:13:15.520 "raid_level": "raid1",
00:13:15.520 "superblock": false,
00:13:15.520 "num_base_bdevs": 4,
00:13:15.520 "num_base_bdevs_discovered": 3,
00:13:15.520 "num_base_bdevs_operational": 3,
00:13:15.520 "base_bdevs_list": [
00:13:15.520 {
00:13:15.520 "name": null,
00:13:15.520 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:15.520 "is_configured": false,
00:13:15.520 "data_offset": 0,
00:13:15.520 "data_size": 65536
00:13:15.520 },
00:13:15.520 {
00:13:15.520 "name": "BaseBdev2",
00:13:15.520 "uuid": "f2d637bc-7fb9-5a4e-8c18-0104f762145b",
00:13:15.520 "is_configured": true,
00:13:15.520 "data_offset": 0,
00:13:15.520 "data_size": 65536
00:13:15.520 },
00:13:15.520 {
00:13:15.520 "name": "BaseBdev3",
00:13:15.520 "uuid": "d03984ff-7542-51ed-9697-359a466d270a",
00:13:15.520 "is_configured": true,
00:13:15.520 "data_offset": 0,
00:13:15.520 "data_size": 65536
00:13:15.520 },
00:13:15.520 {
00:13:15.520 "name": "BaseBdev4",
00:13:15.520 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9",
00:13:15.520 "is_configured": true,
00:13:15.520 "data_offset": 0,
00:13:15.520 "data_size": 65536
00:13:15.520 }
00:13:15.520 ]
00:13:15.520 }'
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:15.520 [2024-11-26 21:20:33.594887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:15.520 [2024-11-26 21:20:33.608812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.520 21:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:13:15.520 [2024-11-26 21:20:33.610983] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:16.899 "name": "raid_bdev1",
00:13:16.899 "uuid": "4d343949-77fb-483b-b241-ad267269356e",
00:13:16.899 "strip_size_kb": 0,
00:13:16.899 "state": "online",
00:13:16.899 "raid_level": "raid1",
00:13:16.899 "superblock": false,
00:13:16.899 "num_base_bdevs": 4,
00:13:16.899 "num_base_bdevs_discovered": 4,
00:13:16.899 "num_base_bdevs_operational": 4,
00:13:16.899 "process": {
00:13:16.899 "type": "rebuild",
00:13:16.899 "target": "spare",
00:13:16.899 "progress": {
00:13:16.899 "blocks": 20480,
00:13:16.899 "percent": 31
00:13:16.899 }
00:13:16.899 },
00:13:16.899 "base_bdevs_list": [
00:13:16.899 {
00:13:16.899 "name": "spare",
00:13:16.899 "uuid": "9b3c517b-5a85-56c8-8384-f8176ae27714",
00:13:16.899 "is_configured": true,
00:13:16.899 "data_offset": 0,
00:13:16.899 "data_size": 65536
00:13:16.899 },
00:13:16.899 {
00:13:16.899 "name": "BaseBdev2",
00:13:16.899 "uuid": "f2d637bc-7fb9-5a4e-8c18-0104f762145b",
00:13:16.899 "is_configured": true,
00:13:16.899 "data_offset": 0,
00:13:16.899 "data_size": 65536
00:13:16.899 },
00:13:16.899 {
00:13:16.899 "name": "BaseBdev3",
00:13:16.899 "uuid": "d03984ff-7542-51ed-9697-359a466d270a",
00:13:16.899 "is_configured": true,
00:13:16.899 "data_offset": 0,
00:13:16.899 "data_size": 65536
00:13:16.899 },
00:13:16.899 {
00:13:16.899 "name": "BaseBdev4",
00:13:16.899 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9",
00:13:16.899 "is_configured": true,
00:13:16.899 "data_offset": 0,
00:13:16.899 "data_size": 65536
00:13:16.899 }
00:13:16.899 ]
00:13:16.899 }'
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:16.899 [2024-11-26 21:20:34.778326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:16.899 [2024-11-26 21:20:34.819935] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.899 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:16.899 "name": "raid_bdev1",
00:13:16.899 "uuid": "4d343949-77fb-483b-b241-ad267269356e",
00:13:16.899 "strip_size_kb": 0,
00:13:16.899 "state": "online",
00:13:16.899 "raid_level": "raid1",
00:13:16.899 "superblock": false,
00:13:16.899 "num_base_bdevs": 4,
00:13:16.899 "num_base_bdevs_discovered": 3,
00:13:16.899 "num_base_bdevs_operational": 3,
00:13:16.899 "process": {
00:13:16.899 "type": "rebuild",
00:13:16.899 "target": "spare",
00:13:16.899 "progress": {
00:13:16.899 "blocks": 24576,
00:13:16.899 "percent": 37
00:13:16.899 }
00:13:16.899 },
00:13:16.899 "base_bdevs_list": [
00:13:16.899 {
00:13:16.899 "name": "spare",
00:13:16.899 "uuid": "9b3c517b-5a85-56c8-8384-f8176ae27714",
00:13:16.899 "is_configured": true,
00:13:16.899 "data_offset": 0,
00:13:16.899 "data_size": 65536
00:13:16.899 },
00:13:16.899 {
00:13:16.899 "name": null,
00:13:16.899 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:16.899 "is_configured": false,
00:13:16.899 "data_offset": 0,
00:13:16.899 "data_size": 65536
00:13:16.899 },
00:13:16.899 {
00:13:16.899 "name": "BaseBdev3",
00:13:16.899 "uuid": "d03984ff-7542-51ed-9697-359a466d270a",
00:13:16.899 "is_configured": true,
00:13:16.899 "data_offset": 0,
00:13:16.899 "data_size": 65536
00:13:16.899 },
00:13:16.899 {
00:13:16.899 "name": "BaseBdev4",
00:13:16.899 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9",
00:13:16.899 "is_configured": true,
00:13:16.899 "data_offset": 0,
00:13:16.900 "data_size": 65536
00:13:16.900 }
00:13:16.900 ]
00:13:16.900 }'
00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq
-r '.process.target // "none"' 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=436 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.900 21:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.900 "name": "raid_bdev1", 00:13:16.900 "uuid": "4d343949-77fb-483b-b241-ad267269356e", 00:13:16.900 "strip_size_kb": 0, 00:13:16.900 "state": "online", 00:13:16.900 "raid_level": "raid1", 00:13:16.900 "superblock": false, 00:13:16.900 "num_base_bdevs": 4, 00:13:16.900 "num_base_bdevs_discovered": 3, 00:13:16.900 "num_base_bdevs_operational": 3, 00:13:16.900 "process": { 00:13:16.900 "type": "rebuild", 00:13:16.900 "target": "spare", 00:13:16.900 "progress": { 
00:13:16.900 "blocks": 26624, 00:13:16.900 "percent": 40 00:13:16.900 } 00:13:16.900 }, 00:13:16.900 "base_bdevs_list": [ 00:13:16.900 { 00:13:16.900 "name": "spare", 00:13:16.900 "uuid": "9b3c517b-5a85-56c8-8384-f8176ae27714", 00:13:16.900 "is_configured": true, 00:13:16.900 "data_offset": 0, 00:13:16.900 "data_size": 65536 00:13:16.900 }, 00:13:16.900 { 00:13:16.900 "name": null, 00:13:16.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.900 "is_configured": false, 00:13:16.900 "data_offset": 0, 00:13:16.900 "data_size": 65536 00:13:16.900 }, 00:13:16.900 { 00:13:16.900 "name": "BaseBdev3", 00:13:16.900 "uuid": "d03984ff-7542-51ed-9697-359a466d270a", 00:13:16.900 "is_configured": true, 00:13:16.900 "data_offset": 0, 00:13:16.900 "data_size": 65536 00:13:16.900 }, 00:13:16.900 { 00:13:16.900 "name": "BaseBdev4", 00:13:16.900 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9", 00:13:16.900 "is_configured": true, 00:13:16.900 "data_offset": 0, 00:13:16.900 "data_size": 65536 00:13:16.900 } 00:13:16.900 ] 00:13:16.900 }' 00:13:16.900 21:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.900 21:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.900 21:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.160 21:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.160 21:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.100 "name": "raid_bdev1", 00:13:18.100 "uuid": "4d343949-77fb-483b-b241-ad267269356e", 00:13:18.100 "strip_size_kb": 0, 00:13:18.100 "state": "online", 00:13:18.100 "raid_level": "raid1", 00:13:18.100 "superblock": false, 00:13:18.100 "num_base_bdevs": 4, 00:13:18.100 "num_base_bdevs_discovered": 3, 00:13:18.100 "num_base_bdevs_operational": 3, 00:13:18.100 "process": { 00:13:18.100 "type": "rebuild", 00:13:18.100 "target": "spare", 00:13:18.100 "progress": { 00:13:18.100 "blocks": 49152, 00:13:18.100 "percent": 75 00:13:18.100 } 00:13:18.100 }, 00:13:18.100 "base_bdevs_list": [ 00:13:18.100 { 00:13:18.100 "name": "spare", 00:13:18.100 "uuid": "9b3c517b-5a85-56c8-8384-f8176ae27714", 00:13:18.100 "is_configured": true, 00:13:18.100 "data_offset": 0, 00:13:18.100 "data_size": 65536 00:13:18.100 }, 00:13:18.100 { 00:13:18.100 "name": null, 00:13:18.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.100 "is_configured": false, 00:13:18.100 "data_offset": 0, 00:13:18.100 "data_size": 65536 00:13:18.100 }, 00:13:18.100 { 00:13:18.100 "name": "BaseBdev3", 00:13:18.100 "uuid": 
"d03984ff-7542-51ed-9697-359a466d270a", 00:13:18.100 "is_configured": true, 00:13:18.100 "data_offset": 0, 00:13:18.100 "data_size": 65536 00:13:18.100 }, 00:13:18.100 { 00:13:18.100 "name": "BaseBdev4", 00:13:18.100 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9", 00:13:18.100 "is_configured": true, 00:13:18.100 "data_offset": 0, 00:13:18.100 "data_size": 65536 00:13:18.100 } 00:13:18.100 ] 00:13:18.100 }' 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.100 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.101 21:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:19.039 [2024-11-26 21:20:36.834592] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:19.039 [2024-11-26 21:20:36.834699] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:19.039 [2024-11-26 21:20:36.834746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.299 21:20:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.299 "name": "raid_bdev1", 00:13:19.299 "uuid": "4d343949-77fb-483b-b241-ad267269356e", 00:13:19.299 "strip_size_kb": 0, 00:13:19.299 "state": "online", 00:13:19.299 "raid_level": "raid1", 00:13:19.299 "superblock": false, 00:13:19.299 "num_base_bdevs": 4, 00:13:19.299 "num_base_bdevs_discovered": 3, 00:13:19.299 "num_base_bdevs_operational": 3, 00:13:19.299 "base_bdevs_list": [ 00:13:19.299 { 00:13:19.299 "name": "spare", 00:13:19.299 "uuid": "9b3c517b-5a85-56c8-8384-f8176ae27714", 00:13:19.299 "is_configured": true, 00:13:19.299 "data_offset": 0, 00:13:19.299 "data_size": 65536 00:13:19.299 }, 00:13:19.299 { 00:13:19.299 "name": null, 00:13:19.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.299 "is_configured": false, 00:13:19.299 "data_offset": 0, 00:13:19.299 "data_size": 65536 00:13:19.299 }, 00:13:19.299 { 00:13:19.299 "name": "BaseBdev3", 00:13:19.299 "uuid": "d03984ff-7542-51ed-9697-359a466d270a", 00:13:19.299 "is_configured": true, 00:13:19.299 "data_offset": 0, 00:13:19.299 "data_size": 65536 00:13:19.299 }, 00:13:19.299 { 00:13:19.299 "name": "BaseBdev4", 00:13:19.299 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9", 00:13:19.299 "is_configured": true, 00:13:19.299 "data_offset": 0, 00:13:19.299 "data_size": 65536 00:13:19.299 } 00:13:19.299 ] 00:13:19.299 }' 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.299 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.299 "name": "raid_bdev1", 00:13:19.299 "uuid": "4d343949-77fb-483b-b241-ad267269356e", 00:13:19.299 "strip_size_kb": 0, 00:13:19.299 "state": "online", 00:13:19.299 "raid_level": "raid1", 00:13:19.299 "superblock": false, 00:13:19.299 "num_base_bdevs": 4, 00:13:19.299 "num_base_bdevs_discovered": 3, 00:13:19.299 "num_base_bdevs_operational": 3, 00:13:19.299 
"base_bdevs_list": [ 00:13:19.299 { 00:13:19.299 "name": "spare", 00:13:19.299 "uuid": "9b3c517b-5a85-56c8-8384-f8176ae27714", 00:13:19.299 "is_configured": true, 00:13:19.299 "data_offset": 0, 00:13:19.299 "data_size": 65536 00:13:19.299 }, 00:13:19.299 { 00:13:19.299 "name": null, 00:13:19.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.299 "is_configured": false, 00:13:19.299 "data_offset": 0, 00:13:19.299 "data_size": 65536 00:13:19.299 }, 00:13:19.300 { 00:13:19.300 "name": "BaseBdev3", 00:13:19.300 "uuid": "d03984ff-7542-51ed-9697-359a466d270a", 00:13:19.300 "is_configured": true, 00:13:19.300 "data_offset": 0, 00:13:19.300 "data_size": 65536 00:13:19.300 }, 00:13:19.300 { 00:13:19.300 "name": "BaseBdev4", 00:13:19.300 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9", 00:13:19.300 "is_configured": true, 00:13:19.300 "data_offset": 0, 00:13:19.300 "data_size": 65536 00:13:19.300 } 00:13:19.300 ] 00:13:19.300 }' 00:13:19.300 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.300 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:19.300 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.559 "name": "raid_bdev1", 00:13:19.559 "uuid": "4d343949-77fb-483b-b241-ad267269356e", 00:13:19.559 "strip_size_kb": 0, 00:13:19.559 "state": "online", 00:13:19.559 "raid_level": "raid1", 00:13:19.559 "superblock": false, 00:13:19.559 "num_base_bdevs": 4, 00:13:19.559 "num_base_bdevs_discovered": 3, 00:13:19.559 "num_base_bdevs_operational": 3, 00:13:19.559 "base_bdevs_list": [ 00:13:19.559 { 00:13:19.559 "name": "spare", 00:13:19.559 "uuid": "9b3c517b-5a85-56c8-8384-f8176ae27714", 00:13:19.559 "is_configured": true, 00:13:19.559 "data_offset": 0, 00:13:19.559 "data_size": 65536 00:13:19.559 }, 00:13:19.559 { 00:13:19.559 "name": null, 00:13:19.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.559 "is_configured": false, 00:13:19.559 "data_offset": 0, 00:13:19.559 "data_size": 65536 00:13:19.559 }, 00:13:19.559 { 00:13:19.559 "name": "BaseBdev3", 00:13:19.559 "uuid": 
"d03984ff-7542-51ed-9697-359a466d270a", 00:13:19.559 "is_configured": true, 00:13:19.559 "data_offset": 0, 00:13:19.559 "data_size": 65536 00:13:19.559 }, 00:13:19.559 { 00:13:19.559 "name": "BaseBdev4", 00:13:19.559 "uuid": "8b61d6e2-67b9-5e8f-b010-70ed2cd00da9", 00:13:19.559 "is_configured": true, 00:13:19.559 "data_offset": 0, 00:13:19.559 "data_size": 65536 00:13:19.559 } 00:13:19.559 ] 00:13:19.559 }' 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.559 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.819 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:19.819 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.819 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.819 [2024-11-26 21:20:37.920496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:19.819 [2024-11-26 21:20:37.920543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.819 [2024-11-26 21:20:37.920640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.819 [2024-11-26 21:20:37.920731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.819 [2024-11-26 21:20:37.920743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:19.819 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.819 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.819 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.819 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:19.819 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:19.819 21:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.819 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:20.078 21:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:20.078 /dev/nbd0 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:20.078 21:20:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.078 1+0 records in 00:13:20.078 1+0 records out 00:13:20.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404892 s, 10.1 MB/s 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:20.078 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:20.338 /dev/nbd1 00:13:20.338 
21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.338 1+0 records in 00:13:20.338 1+0 records out 00:13:20.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580681 s, 7.1 MB/s 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:20.338 21:20:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:20.600 21:20:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:20.600 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.600 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:20.600 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.600 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:20.600 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.600 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:20.859 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.859 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.859 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.859 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.859 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.859 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.859 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:20.859 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.859 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.859 21:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:21.118 21:20:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:21.118 21:20:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:21.118 21:20:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:21.118 21:20:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.118 21:20:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.118 21:20:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:21.118 21:20:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:21.118 21:20:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.118 21:20:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:21.119 21:20:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77328 00:13:21.119 21:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77328 ']' 00:13:21.119 21:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77328 00:13:21.119 21:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:21.119 21:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.119 21:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77328 00:13:21.119 21:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.119 21:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.119 killing process with pid 77328 00:13:21.119 21:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77328' 00:13:21.119 
21:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77328 00:13:21.119 Received shutdown signal, test time was about 60.000000 seconds 00:13:21.119 00:13:21.119 Latency(us) 00:13:21.119 [2024-11-26T21:20:39.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.119 [2024-11-26T21:20:39.275Z] =================================================================================================================== 00:13:21.119 [2024-11-26T21:20:39.275Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:21.119 [2024-11-26 21:20:39.132953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.119 21:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77328 00:13:21.689 [2024-11-26 21:20:39.656410] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:23.069 00:13:23.069 real 0m17.557s 00:13:23.069 user 0m19.243s 00:13:23.069 sys 0m3.383s 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.069 ************************************ 00:13:23.069 END TEST raid_rebuild_test 00:13:23.069 ************************************ 00:13:23.069 21:20:40 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:23.069 21:20:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:23.069 21:20:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.069 21:20:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:23.069 ************************************ 00:13:23.069 START TEST raid_rebuild_test_sb 00:13:23.069 ************************************ 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:23.069 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77775 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77775 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77775 ']' 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.070 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.070 21:20:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.070 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:23.070 Zero copy mechanism will not be used. 00:13:23.070 [2024-11-26 21:20:41.029510] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:13:23.070 [2024-11-26 21:20:41.029640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77775 ] 00:13:23.070 [2024-11-26 21:20:41.185864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.329 [2024-11-26 21:20:41.321055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.588 [2024-11-26 21:20:41.552856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.588 [2024-11-26 21:20:41.552924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.849 BaseBdev1_malloc 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.849 [2024-11-26 21:20:41.899568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:23.849 [2024-11-26 21:20:41.899639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.849 [2024-11-26 21:20:41.899662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:23.849 [2024-11-26 21:20:41.899674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.849 [2024-11-26 21:20:41.902017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.849 [2024-11-26 21:20:41.902050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:23.849 BaseBdev1 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.849 BaseBdev2_malloc 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.849 [2024-11-26 21:20:41.954885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:23.849 [2024-11-26 21:20:41.954958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.849 [2024-11-26 21:20:41.954995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:23.849 [2024-11-26 21:20:41.955007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.849 [2024-11-26 21:20:41.957438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.849 [2024-11-26 21:20:41.957471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:23.849 BaseBdev2 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.849 21:20:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.110 BaseBdev3_malloc 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.110 [2024-11-26 21:20:42.029149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:24.110 [2024-11-26 21:20:42.029205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.110 [2024-11-26 21:20:42.029227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:24.110 [2024-11-26 21:20:42.029239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.110 [2024-11-26 21:20:42.031559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.110 [2024-11-26 21:20:42.031610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:24.110 BaseBdev3 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.110 BaseBdev4_malloc 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:24.110 [2024-11-26 21:20:42.090564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:24.110 [2024-11-26 21:20:42.090623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.110 [2024-11-26 21:20:42.090643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:24.110 [2024-11-26 21:20:42.090654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.110 [2024-11-26 21:20:42.092907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.110 [2024-11-26 21:20:42.092943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:24.110 BaseBdev4 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.110 spare_malloc 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.110 spare_delay 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:24.110 21:20:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.110 [2024-11-26 21:20:42.164627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:24.110 [2024-11-26 21:20:42.164788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.110 [2024-11-26 21:20:42.164810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:24.110 [2024-11-26 21:20:42.164822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.110 [2024-11-26 21:20:42.167164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.110 [2024-11-26 21:20:42.167201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:24.110 spare 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.110 [2024-11-26 21:20:42.176659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.110 [2024-11-26 21:20:42.178723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.110 [2024-11-26 21:20:42.178786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:24.110 [2024-11-26 21:20:42.178836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:24.110 [2024-11-26 21:20:42.179031] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:24.110 [2024-11-26 21:20:42.179060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:24.110 [2024-11-26 21:20:42.179308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:24.110 [2024-11-26 21:20:42.179492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:24.110 [2024-11-26 21:20:42.179504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:24.110 [2024-11-26 21:20:42.179653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.110 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.111 "name": "raid_bdev1", 00:13:24.111 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:24.111 "strip_size_kb": 0, 00:13:24.111 "state": "online", 00:13:24.111 "raid_level": "raid1", 00:13:24.111 "superblock": true, 00:13:24.111 "num_base_bdevs": 4, 00:13:24.111 "num_base_bdevs_discovered": 4, 00:13:24.111 "num_base_bdevs_operational": 4, 00:13:24.111 "base_bdevs_list": [ 00:13:24.111 { 00:13:24.111 "name": "BaseBdev1", 00:13:24.111 "uuid": "d43eebff-3b52-52e0-a7d0-204e4e2e87aa", 00:13:24.111 "is_configured": true, 00:13:24.111 "data_offset": 2048, 00:13:24.111 "data_size": 63488 00:13:24.111 }, 00:13:24.111 { 00:13:24.111 "name": "BaseBdev2", 00:13:24.111 "uuid": "20f1510e-61a5-5f51-86c8-2137fbd9dcbc", 00:13:24.111 "is_configured": true, 00:13:24.111 "data_offset": 2048, 00:13:24.111 "data_size": 63488 00:13:24.111 }, 00:13:24.111 { 00:13:24.111 "name": "BaseBdev3", 00:13:24.111 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:24.111 "is_configured": true, 00:13:24.111 "data_offset": 2048, 00:13:24.111 "data_size": 63488 00:13:24.111 }, 00:13:24.111 { 00:13:24.111 "name": "BaseBdev4", 00:13:24.111 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:24.111 "is_configured": true, 00:13:24.111 "data_offset": 2048, 00:13:24.111 "data_size": 63488 00:13:24.111 } 00:13:24.111 ] 00:13:24.111 }' 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.111 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.681 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.682 [2024-11-26 21:20:42.608455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.682 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:24.942 [2024-11-26 21:20:42.879689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:24.942 /dev/nbd0 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:24.942 
21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.942 1+0 records in 00:13:24.942 1+0 records out 00:13:24.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347444 s, 11.8 MB/s 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:24.942 21:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:31.550 63488+0 records in 00:13:31.550 63488+0 records out 00:13:31.550 32505856 bytes (33 MB, 31 MiB) copied, 5.51204 s, 5.9 MB/s 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:31.550 [2024-11-26 21:20:48.650759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 [2024-11-26 21:20:48.689024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.550 
21:20:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.550 "name": "raid_bdev1", 00:13:31.550 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:31.550 "strip_size_kb": 0, 00:13:31.550 "state": 
"online", 00:13:31.550 "raid_level": "raid1", 00:13:31.550 "superblock": true, 00:13:31.550 "num_base_bdevs": 4, 00:13:31.550 "num_base_bdevs_discovered": 3, 00:13:31.550 "num_base_bdevs_operational": 3, 00:13:31.550 "base_bdevs_list": [ 00:13:31.550 { 00:13:31.550 "name": null, 00:13:31.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.550 "is_configured": false, 00:13:31.550 "data_offset": 0, 00:13:31.550 "data_size": 63488 00:13:31.550 }, 00:13:31.550 { 00:13:31.550 "name": "BaseBdev2", 00:13:31.550 "uuid": "20f1510e-61a5-5f51-86c8-2137fbd9dcbc", 00:13:31.550 "is_configured": true, 00:13:31.550 "data_offset": 2048, 00:13:31.550 "data_size": 63488 00:13:31.550 }, 00:13:31.550 { 00:13:31.550 "name": "BaseBdev3", 00:13:31.550 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:31.550 "is_configured": true, 00:13:31.550 "data_offset": 2048, 00:13:31.550 "data_size": 63488 00:13:31.550 }, 00:13:31.550 { 00:13:31.550 "name": "BaseBdev4", 00:13:31.550 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:31.550 "is_configured": true, 00:13:31.550 "data_offset": 2048, 00:13:31.550 "data_size": 63488 00:13:31.550 } 00:13:31.550 ] 00:13:31.550 }' 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.550 21:20:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 21:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:31.550 21:20:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.550 21:20:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 [2024-11-26 21:20:49.176224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.550 [2024-11-26 21:20:49.191814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:31.550 21:20:49 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.550 21:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:31.550 [2024-11-26 21:20:49.193960] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:32.120 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.121 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.121 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.121 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.121 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.121 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.121 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.121 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.121 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.121 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.121 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.121 "name": "raid_bdev1", 00:13:32.121 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:32.121 "strip_size_kb": 0, 00:13:32.121 "state": "online", 00:13:32.121 "raid_level": "raid1", 00:13:32.121 "superblock": true, 00:13:32.121 "num_base_bdevs": 4, 00:13:32.121 "num_base_bdevs_discovered": 4, 00:13:32.121 "num_base_bdevs_operational": 4, 00:13:32.121 "process": { 00:13:32.121 "type": "rebuild", 00:13:32.121 "target": "spare", 00:13:32.121 "progress": { 00:13:32.121 "blocks": 20480, 
00:13:32.121 "percent": 32 00:13:32.121 } 00:13:32.121 }, 00:13:32.121 "base_bdevs_list": [ 00:13:32.121 { 00:13:32.121 "name": "spare", 00:13:32.121 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:32.121 "is_configured": true, 00:13:32.121 "data_offset": 2048, 00:13:32.121 "data_size": 63488 00:13:32.121 }, 00:13:32.121 { 00:13:32.121 "name": "BaseBdev2", 00:13:32.121 "uuid": "20f1510e-61a5-5f51-86c8-2137fbd9dcbc", 00:13:32.121 "is_configured": true, 00:13:32.121 "data_offset": 2048, 00:13:32.121 "data_size": 63488 00:13:32.121 }, 00:13:32.121 { 00:13:32.121 "name": "BaseBdev3", 00:13:32.121 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:32.121 "is_configured": true, 00:13:32.121 "data_offset": 2048, 00:13:32.121 "data_size": 63488 00:13:32.121 }, 00:13:32.121 { 00:13:32.121 "name": "BaseBdev4", 00:13:32.121 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:32.121 "is_configured": true, 00:13:32.121 "data_offset": 2048, 00:13:32.121 "data_size": 63488 00:13:32.121 } 00:13:32.121 ] 00:13:32.121 }' 00:13:32.121 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.381 [2024-11-26 21:20:50.333272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.381 [2024-11-26 21:20:50.402894] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:32.381 [2024-11-26 21:20:50.402969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.381 [2024-11-26 21:20:50.402986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.381 [2024-11-26 21:20:50.402996] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.381 "name": "raid_bdev1", 00:13:32.381 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:32.381 "strip_size_kb": 0, 00:13:32.381 "state": "online", 00:13:32.381 "raid_level": "raid1", 00:13:32.381 "superblock": true, 00:13:32.381 "num_base_bdevs": 4, 00:13:32.381 "num_base_bdevs_discovered": 3, 00:13:32.381 "num_base_bdevs_operational": 3, 00:13:32.381 "base_bdevs_list": [ 00:13:32.381 { 00:13:32.381 "name": null, 00:13:32.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.381 "is_configured": false, 00:13:32.381 "data_offset": 0, 00:13:32.381 "data_size": 63488 00:13:32.381 }, 00:13:32.381 { 00:13:32.381 "name": "BaseBdev2", 00:13:32.381 "uuid": "20f1510e-61a5-5f51-86c8-2137fbd9dcbc", 00:13:32.381 "is_configured": true, 00:13:32.381 "data_offset": 2048, 00:13:32.381 "data_size": 63488 00:13:32.381 }, 00:13:32.381 { 00:13:32.381 "name": "BaseBdev3", 00:13:32.381 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:32.381 "is_configured": true, 00:13:32.381 "data_offset": 2048, 00:13:32.381 "data_size": 63488 00:13:32.381 }, 00:13:32.381 { 00:13:32.381 "name": "BaseBdev4", 00:13:32.381 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:32.381 "is_configured": true, 00:13:32.381 "data_offset": 2048, 00:13:32.381 "data_size": 63488 00:13:32.381 } 00:13:32.381 ] 00:13:32.381 }' 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.381 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.951 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.951 
21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.951 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.951 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.951 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.951 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.951 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.951 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.951 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.951 21:20:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.951 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.951 "name": "raid_bdev1", 00:13:32.951 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:32.951 "strip_size_kb": 0, 00:13:32.951 "state": "online", 00:13:32.951 "raid_level": "raid1", 00:13:32.951 "superblock": true, 00:13:32.951 "num_base_bdevs": 4, 00:13:32.951 "num_base_bdevs_discovered": 3, 00:13:32.951 "num_base_bdevs_operational": 3, 00:13:32.951 "base_bdevs_list": [ 00:13:32.951 { 00:13:32.951 "name": null, 00:13:32.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.951 "is_configured": false, 00:13:32.951 "data_offset": 0, 00:13:32.951 "data_size": 63488 00:13:32.951 }, 00:13:32.952 { 00:13:32.952 "name": "BaseBdev2", 00:13:32.952 "uuid": "20f1510e-61a5-5f51-86c8-2137fbd9dcbc", 00:13:32.952 "is_configured": true, 00:13:32.952 "data_offset": 2048, 00:13:32.952 "data_size": 63488 00:13:32.952 }, 00:13:32.952 { 00:13:32.952 "name": "BaseBdev3", 00:13:32.952 "uuid": 
"8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:32.952 "is_configured": true, 00:13:32.952 "data_offset": 2048, 00:13:32.952 "data_size": 63488 00:13:32.952 }, 00:13:32.952 { 00:13:32.952 "name": "BaseBdev4", 00:13:32.952 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:32.952 "is_configured": true, 00:13:32.952 "data_offset": 2048, 00:13:32.952 "data_size": 63488 00:13:32.952 } 00:13:32.952 ] 00:13:32.952 }' 00:13:32.952 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.952 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:32.952 21:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.952 21:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:32.952 21:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:32.952 21:20:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.952 21:20:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.952 [2024-11-26 21:20:51.021480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.952 [2024-11-26 21:20:51.035284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:32.952 21:20:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.952 21:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:32.952 [2024-11-26 21:20:51.037456] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.892 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.892 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:33.892 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.892 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.892 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.152 "name": "raid_bdev1", 00:13:34.152 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:34.152 "strip_size_kb": 0, 00:13:34.152 "state": "online", 00:13:34.152 "raid_level": "raid1", 00:13:34.152 "superblock": true, 00:13:34.152 "num_base_bdevs": 4, 00:13:34.152 "num_base_bdevs_discovered": 4, 00:13:34.152 "num_base_bdevs_operational": 4, 00:13:34.152 "process": { 00:13:34.152 "type": "rebuild", 00:13:34.152 "target": "spare", 00:13:34.152 "progress": { 00:13:34.152 "blocks": 20480, 00:13:34.152 "percent": 32 00:13:34.152 } 00:13:34.152 }, 00:13:34.152 "base_bdevs_list": [ 00:13:34.152 { 00:13:34.152 "name": "spare", 00:13:34.152 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:34.152 "is_configured": true, 00:13:34.152 "data_offset": 2048, 00:13:34.152 "data_size": 63488 00:13:34.152 }, 00:13:34.152 { 00:13:34.152 "name": "BaseBdev2", 00:13:34.152 "uuid": "20f1510e-61a5-5f51-86c8-2137fbd9dcbc", 00:13:34.152 "is_configured": true, 00:13:34.152 "data_offset": 2048, 
00:13:34.152 "data_size": 63488 00:13:34.152 }, 00:13:34.152 { 00:13:34.152 "name": "BaseBdev3", 00:13:34.152 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:34.152 "is_configured": true, 00:13:34.152 "data_offset": 2048, 00:13:34.152 "data_size": 63488 00:13:34.152 }, 00:13:34.152 { 00:13:34.152 "name": "BaseBdev4", 00:13:34.152 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:34.152 "is_configured": true, 00:13:34.152 "data_offset": 2048, 00:13:34.152 "data_size": 63488 00:13:34.152 } 00:13:34.152 ] 00:13:34.152 }' 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:34.152 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.152 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.152 [2024-11-26 21:20:52.156749] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:34.413 [2024-11-26 21:20:52.346171] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.413 "name": "raid_bdev1", 00:13:34.413 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:34.413 "strip_size_kb": 0, 00:13:34.413 "state": "online", 00:13:34.413 "raid_level": "raid1", 00:13:34.413 "superblock": true, 00:13:34.413 "num_base_bdevs": 4, 
00:13:34.413 "num_base_bdevs_discovered": 3, 00:13:34.413 "num_base_bdevs_operational": 3, 00:13:34.413 "process": { 00:13:34.413 "type": "rebuild", 00:13:34.413 "target": "spare", 00:13:34.413 "progress": { 00:13:34.413 "blocks": 24576, 00:13:34.413 "percent": 38 00:13:34.413 } 00:13:34.413 }, 00:13:34.413 "base_bdevs_list": [ 00:13:34.413 { 00:13:34.413 "name": "spare", 00:13:34.413 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:34.413 "is_configured": true, 00:13:34.413 "data_offset": 2048, 00:13:34.413 "data_size": 63488 00:13:34.413 }, 00:13:34.413 { 00:13:34.413 "name": null, 00:13:34.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.413 "is_configured": false, 00:13:34.413 "data_offset": 0, 00:13:34.413 "data_size": 63488 00:13:34.413 }, 00:13:34.413 { 00:13:34.413 "name": "BaseBdev3", 00:13:34.413 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:34.413 "is_configured": true, 00:13:34.413 "data_offset": 2048, 00:13:34.413 "data_size": 63488 00:13:34.413 }, 00:13:34.413 { 00:13:34.413 "name": "BaseBdev4", 00:13:34.413 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:34.413 "is_configured": true, 00:13:34.413 "data_offset": 2048, 00:13:34.413 "data_size": 63488 00:13:34.413 } 00:13:34.413 ] 00:13:34.413 }' 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=454 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.413 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.413 "name": "raid_bdev1", 00:13:34.413 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:34.413 "strip_size_kb": 0, 00:13:34.413 "state": "online", 00:13:34.413 "raid_level": "raid1", 00:13:34.413 "superblock": true, 00:13:34.413 "num_base_bdevs": 4, 00:13:34.413 "num_base_bdevs_discovered": 3, 00:13:34.413 "num_base_bdevs_operational": 3, 00:13:34.413 "process": { 00:13:34.413 "type": "rebuild", 00:13:34.413 "target": "spare", 00:13:34.413 "progress": { 00:13:34.413 "blocks": 26624, 00:13:34.413 "percent": 41 00:13:34.413 } 00:13:34.413 }, 00:13:34.413 "base_bdevs_list": [ 00:13:34.413 { 00:13:34.413 "name": "spare", 00:13:34.414 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:34.414 "is_configured": true, 00:13:34.414 "data_offset": 2048, 00:13:34.414 "data_size": 63488 00:13:34.414 }, 00:13:34.414 { 
00:13:34.414 "name": null, 00:13:34.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.414 "is_configured": false, 00:13:34.414 "data_offset": 0, 00:13:34.414 "data_size": 63488 00:13:34.414 }, 00:13:34.414 { 00:13:34.414 "name": "BaseBdev3", 00:13:34.414 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:34.414 "is_configured": true, 00:13:34.414 "data_offset": 2048, 00:13:34.414 "data_size": 63488 00:13:34.414 }, 00:13:34.414 { 00:13:34.414 "name": "BaseBdev4", 00:13:34.414 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:34.414 "is_configured": true, 00:13:34.414 "data_offset": 2048, 00:13:34.414 "data_size": 63488 00:13:34.414 } 00:13:34.414 ] 00:13:34.414 }' 00:13:34.414 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.414 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.414 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.674 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.674 21:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.613 "name": "raid_bdev1", 00:13:35.613 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:35.613 "strip_size_kb": 0, 00:13:35.613 "state": "online", 00:13:35.613 "raid_level": "raid1", 00:13:35.613 "superblock": true, 00:13:35.613 "num_base_bdevs": 4, 00:13:35.613 "num_base_bdevs_discovered": 3, 00:13:35.613 "num_base_bdevs_operational": 3, 00:13:35.613 "process": { 00:13:35.613 "type": "rebuild", 00:13:35.613 "target": "spare", 00:13:35.613 "progress": { 00:13:35.613 "blocks": 49152, 00:13:35.613 "percent": 77 00:13:35.613 } 00:13:35.613 }, 00:13:35.613 "base_bdevs_list": [ 00:13:35.613 { 00:13:35.613 "name": "spare", 00:13:35.613 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:35.613 "is_configured": true, 00:13:35.613 "data_offset": 2048, 00:13:35.613 "data_size": 63488 00:13:35.613 }, 00:13:35.613 { 00:13:35.613 "name": null, 00:13:35.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.613 "is_configured": false, 00:13:35.613 "data_offset": 0, 00:13:35.613 "data_size": 63488 00:13:35.613 }, 00:13:35.613 { 00:13:35.613 "name": "BaseBdev3", 00:13:35.613 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:35.613 "is_configured": true, 00:13:35.613 "data_offset": 2048, 00:13:35.613 "data_size": 63488 00:13:35.613 }, 00:13:35.613 { 00:13:35.613 "name": "BaseBdev4", 00:13:35.613 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:35.613 "is_configured": true, 00:13:35.613 "data_offset": 
2048, 00:13:35.613 "data_size": 63488 00:13:35.613 } 00:13:35.613 ] 00:13:35.613 }' 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.613 21:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:36.183 [2024-11-26 21:20:54.260295] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:36.183 [2024-11-26 21:20:54.260389] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:36.183 [2024-11-26 21:20:54.260512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.753 "name": "raid_bdev1", 00:13:36.753 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:36.753 "strip_size_kb": 0, 00:13:36.753 "state": "online", 00:13:36.753 "raid_level": "raid1", 00:13:36.753 "superblock": true, 00:13:36.753 "num_base_bdevs": 4, 00:13:36.753 "num_base_bdevs_discovered": 3, 00:13:36.753 "num_base_bdevs_operational": 3, 00:13:36.753 "base_bdevs_list": [ 00:13:36.753 { 00:13:36.753 "name": "spare", 00:13:36.753 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:36.753 "is_configured": true, 00:13:36.753 "data_offset": 2048, 00:13:36.753 "data_size": 63488 00:13:36.753 }, 00:13:36.753 { 00:13:36.753 "name": null, 00:13:36.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.753 "is_configured": false, 00:13:36.753 "data_offset": 0, 00:13:36.753 "data_size": 63488 00:13:36.753 }, 00:13:36.753 { 00:13:36.753 "name": "BaseBdev3", 00:13:36.753 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:36.753 "is_configured": true, 00:13:36.753 "data_offset": 2048, 00:13:36.753 "data_size": 63488 00:13:36.753 }, 00:13:36.753 { 00:13:36.753 "name": "BaseBdev4", 00:13:36.753 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:36.753 "is_configured": true, 00:13:36.753 "data_offset": 2048, 00:13:36.753 "data_size": 63488 00:13:36.753 } 00:13:36.753 ] 00:13:36.753 }' 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.753 "name": "raid_bdev1", 00:13:36.753 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:36.753 "strip_size_kb": 0, 00:13:36.753 "state": "online", 00:13:36.753 "raid_level": "raid1", 00:13:36.753 "superblock": true, 00:13:36.753 "num_base_bdevs": 4, 00:13:36.753 "num_base_bdevs_discovered": 3, 00:13:36.753 "num_base_bdevs_operational": 3, 00:13:36.753 "base_bdevs_list": [ 00:13:36.753 { 00:13:36.753 "name": "spare", 00:13:36.753 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:36.753 "is_configured": true, 00:13:36.753 "data_offset": 2048, 00:13:36.753 "data_size": 63488 
00:13:36.753 }, 00:13:36.753 { 00:13:36.753 "name": null, 00:13:36.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.753 "is_configured": false, 00:13:36.753 "data_offset": 0, 00:13:36.753 "data_size": 63488 00:13:36.753 }, 00:13:36.753 { 00:13:36.753 "name": "BaseBdev3", 00:13:36.753 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:36.753 "is_configured": true, 00:13:36.753 "data_offset": 2048, 00:13:36.753 "data_size": 63488 00:13:36.753 }, 00:13:36.753 { 00:13:36.753 "name": "BaseBdev4", 00:13:36.753 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:36.753 "is_configured": true, 00:13:36.753 "data_offset": 2048, 00:13:36.753 "data_size": 63488 00:13:36.753 } 00:13:36.753 ] 00:13:36.753 }' 00:13:36.753 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.013 21:20:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.013 21:20:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.013 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.013 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.013 "name": "raid_bdev1", 00:13:37.013 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:37.013 "strip_size_kb": 0, 00:13:37.013 "state": "online", 00:13:37.013 "raid_level": "raid1", 00:13:37.013 "superblock": true, 00:13:37.013 "num_base_bdevs": 4, 00:13:37.013 "num_base_bdevs_discovered": 3, 00:13:37.013 "num_base_bdevs_operational": 3, 00:13:37.013 "base_bdevs_list": [ 00:13:37.013 { 00:13:37.013 "name": "spare", 00:13:37.013 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:37.013 "is_configured": true, 00:13:37.013 "data_offset": 2048, 00:13:37.013 "data_size": 63488 00:13:37.013 }, 00:13:37.013 { 00:13:37.013 "name": null, 00:13:37.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.013 "is_configured": false, 00:13:37.014 "data_offset": 0, 00:13:37.014 "data_size": 63488 00:13:37.014 }, 00:13:37.014 { 00:13:37.014 "name": "BaseBdev3", 00:13:37.014 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:37.014 "is_configured": true, 00:13:37.014 "data_offset": 2048, 00:13:37.014 "data_size": 63488 00:13:37.014 }, 
00:13:37.014 { 00:13:37.014 "name": "BaseBdev4", 00:13:37.014 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:37.014 "is_configured": true, 00:13:37.014 "data_offset": 2048, 00:13:37.014 "data_size": 63488 00:13:37.014 } 00:13:37.014 ] 00:13:37.014 }' 00:13:37.014 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.014 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.598 [2024-11-26 21:20:55.444753] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.598 [2024-11-26 21:20:55.444803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.598 [2024-11-26 21:20:55.444910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.598 [2024-11-26 21:20:55.445015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.598 [2024-11-26 21:20:55.445027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.598 21:20:55 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:37.598 /dev/nbd0 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.598 1+0 records in 00:13:37.598 1+0 records out 00:13:37.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360795 s, 11.4 MB/s 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:37.598 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:37.859 /dev/nbd1 00:13:37.859 21:20:55 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.859 1+0 records in 00:13:37.859 1+0 records out 00:13:37.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361486 s, 11.3 MB/s 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:37.859 21:20:55 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:37.859 21:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:38.118 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:38.118 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.118 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:38.118 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.118 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:38.118 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.118 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:38.378 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.378 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.378 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.378 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.378 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.378 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.378 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:38.378 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.378 21:20:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.378 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:38.638 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:38.638 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:38.638 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:38.638 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.638 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.639 [2024-11-26 21:20:56.600791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:13:38.639 [2024-11-26 21:20:56.600857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.639 [2024-11-26 21:20:56.600882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:38.639 [2024-11-26 21:20:56.600892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.639 [2024-11-26 21:20:56.603328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.639 [2024-11-26 21:20:56.603365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:38.639 [2024-11-26 21:20:56.603461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:38.639 [2024-11-26 21:20:56.603516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.639 [2024-11-26 21:20:56.603657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.639 [2024-11-26 21:20:56.603748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.639 spare 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.639 [2024-11-26 21:20:56.703646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:38.639 [2024-11-26 21:20:56.703676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.639 [2024-11-26 21:20:56.704014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:38.639 [2024-11-26 21:20:56.704260] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:38.639 [2024-11-26 21:20:56.704280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:38.639 [2024-11-26 21:20:56.704468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.639 "name": "raid_bdev1", 00:13:38.639 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:38.639 "strip_size_kb": 0, 00:13:38.639 "state": "online", 00:13:38.639 "raid_level": "raid1", 00:13:38.639 "superblock": true, 00:13:38.639 "num_base_bdevs": 4, 00:13:38.639 "num_base_bdevs_discovered": 3, 00:13:38.639 "num_base_bdevs_operational": 3, 00:13:38.639 "base_bdevs_list": [ 00:13:38.639 { 00:13:38.639 "name": "spare", 00:13:38.639 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:38.639 "is_configured": true, 00:13:38.639 "data_offset": 2048, 00:13:38.639 "data_size": 63488 00:13:38.639 }, 00:13:38.639 { 00:13:38.639 "name": null, 00:13:38.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.639 "is_configured": false, 00:13:38.639 "data_offset": 2048, 00:13:38.639 "data_size": 63488 00:13:38.639 }, 00:13:38.639 { 00:13:38.639 "name": "BaseBdev3", 00:13:38.639 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:38.639 "is_configured": true, 00:13:38.639 "data_offset": 2048, 00:13:38.639 "data_size": 63488 00:13:38.639 }, 00:13:38.639 { 00:13:38.639 "name": "BaseBdev4", 00:13:38.639 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:38.639 "is_configured": true, 00:13:38.639 "data_offset": 2048, 00:13:38.639 "data_size": 63488 00:13:38.639 } 00:13:38.639 ] 00:13:38.639 }' 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.639 21:20:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.207 21:20:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.207 "name": "raid_bdev1", 00:13:39.207 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:39.207 "strip_size_kb": 0, 00:13:39.207 "state": "online", 00:13:39.207 "raid_level": "raid1", 00:13:39.207 "superblock": true, 00:13:39.207 "num_base_bdevs": 4, 00:13:39.207 "num_base_bdevs_discovered": 3, 00:13:39.207 "num_base_bdevs_operational": 3, 00:13:39.207 "base_bdevs_list": [ 00:13:39.207 { 00:13:39.207 "name": "spare", 00:13:39.207 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:39.207 "is_configured": true, 00:13:39.207 "data_offset": 2048, 00:13:39.207 "data_size": 63488 00:13:39.207 }, 00:13:39.207 { 00:13:39.207 "name": null, 00:13:39.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.207 "is_configured": false, 00:13:39.207 "data_offset": 2048, 00:13:39.207 "data_size": 63488 00:13:39.207 }, 00:13:39.207 { 00:13:39.207 "name": "BaseBdev3", 00:13:39.207 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:39.207 "is_configured": true, 00:13:39.207 "data_offset": 2048, 00:13:39.207 "data_size": 63488 00:13:39.207 
}, 00:13:39.207 { 00:13:39.207 "name": "BaseBdev4", 00:13:39.207 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:39.207 "is_configured": true, 00:13:39.207 "data_offset": 2048, 00:13:39.207 "data_size": 63488 00:13:39.207 } 00:13:39.207 ] 00:13:39.207 }' 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.207 [2024-11-26 21:20:57.328316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.207 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.208 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.208 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.208 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.208 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.208 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.208 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.208 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.467 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.467 "name": "raid_bdev1", 00:13:39.467 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:39.467 "strip_size_kb": 0, 00:13:39.467 "state": "online", 00:13:39.467 "raid_level": "raid1", 00:13:39.467 "superblock": true, 00:13:39.467 "num_base_bdevs": 4, 00:13:39.467 "num_base_bdevs_discovered": 2, 00:13:39.467 "num_base_bdevs_operational": 
2, 00:13:39.467 "base_bdevs_list": [ 00:13:39.467 { 00:13:39.467 "name": null, 00:13:39.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.467 "is_configured": false, 00:13:39.467 "data_offset": 0, 00:13:39.467 "data_size": 63488 00:13:39.467 }, 00:13:39.467 { 00:13:39.467 "name": null, 00:13:39.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.467 "is_configured": false, 00:13:39.467 "data_offset": 2048, 00:13:39.467 "data_size": 63488 00:13:39.467 }, 00:13:39.467 { 00:13:39.467 "name": "BaseBdev3", 00:13:39.467 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:39.467 "is_configured": true, 00:13:39.467 "data_offset": 2048, 00:13:39.467 "data_size": 63488 00:13:39.467 }, 00:13:39.467 { 00:13:39.467 "name": "BaseBdev4", 00:13:39.467 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:39.467 "is_configured": true, 00:13:39.467 "data_offset": 2048, 00:13:39.467 "data_size": 63488 00:13:39.467 } 00:13:39.467 ] 00:13:39.467 }' 00:13:39.467 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.467 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.727 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:39.727 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.727 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.727 [2024-11-26 21:20:57.771737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.727 [2024-11-26 21:20:57.772016] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:39.727 [2024-11-26 21:20:57.772033] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:39.727 [2024-11-26 21:20:57.772076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.727 [2024-11-26 21:20:57.786245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:39.727 21:20:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.727 21:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:39.727 [2024-11-26 21:20:57.788421] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.667 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.667 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.667 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.667 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.667 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.667 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.667 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.667 21:20:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.667 21:20:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.927 21:20:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.927 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.927 "name": "raid_bdev1", 00:13:40.927 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:40.927 "strip_size_kb": 0, 00:13:40.927 "state": "online", 00:13:40.927 "raid_level": "raid1", 
00:13:40.927 "superblock": true, 00:13:40.927 "num_base_bdevs": 4, 00:13:40.927 "num_base_bdevs_discovered": 3, 00:13:40.927 "num_base_bdevs_operational": 3, 00:13:40.927 "process": { 00:13:40.927 "type": "rebuild", 00:13:40.927 "target": "spare", 00:13:40.927 "progress": { 00:13:40.927 "blocks": 20480, 00:13:40.927 "percent": 32 00:13:40.927 } 00:13:40.927 }, 00:13:40.927 "base_bdevs_list": [ 00:13:40.927 { 00:13:40.927 "name": "spare", 00:13:40.927 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:40.927 "is_configured": true, 00:13:40.927 "data_offset": 2048, 00:13:40.927 "data_size": 63488 00:13:40.927 }, 00:13:40.927 { 00:13:40.927 "name": null, 00:13:40.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.927 "is_configured": false, 00:13:40.927 "data_offset": 2048, 00:13:40.927 "data_size": 63488 00:13:40.927 }, 00:13:40.927 { 00:13:40.927 "name": "BaseBdev3", 00:13:40.927 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:40.927 "is_configured": true, 00:13:40.927 "data_offset": 2048, 00:13:40.927 "data_size": 63488 00:13:40.927 }, 00:13:40.927 { 00:13:40.927 "name": "BaseBdev4", 00:13:40.927 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:40.927 "is_configured": true, 00:13:40.927 "data_offset": 2048, 00:13:40.927 "data_size": 63488 00:13:40.927 } 00:13:40.927 ] 00:13:40.927 }' 00:13:40.927 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.927 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.927 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.927 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.927 21:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:40.927 21:20:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:40.927 21:20:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.927 [2024-11-26 21:20:58.931788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:40.927 [2024-11-26 21:20:58.997320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:40.927 [2024-11-26 21:20:58.997396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.927 [2024-11-26 21:20:58.997417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:40.927 [2024-11-26 21:20:58.997425] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.927 "name": "raid_bdev1", 00:13:40.927 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:40.927 "strip_size_kb": 0, 00:13:40.927 "state": "online", 00:13:40.927 "raid_level": "raid1", 00:13:40.927 "superblock": true, 00:13:40.927 "num_base_bdevs": 4, 00:13:40.927 "num_base_bdevs_discovered": 2, 00:13:40.927 "num_base_bdevs_operational": 2, 00:13:40.927 "base_bdevs_list": [ 00:13:40.927 { 00:13:40.927 "name": null, 00:13:40.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.927 "is_configured": false, 00:13:40.927 "data_offset": 0, 00:13:40.927 "data_size": 63488 00:13:40.927 }, 00:13:40.927 { 00:13:40.927 "name": null, 00:13:40.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.927 "is_configured": false, 00:13:40.927 "data_offset": 2048, 00:13:40.927 "data_size": 63488 00:13:40.927 }, 00:13:40.927 { 00:13:40.927 "name": "BaseBdev3", 00:13:40.927 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:40.927 "is_configured": true, 00:13:40.927 "data_offset": 2048, 00:13:40.927 "data_size": 63488 00:13:40.927 }, 00:13:40.927 { 00:13:40.927 "name": "BaseBdev4", 00:13:40.927 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:40.927 "is_configured": true, 00:13:40.927 "data_offset": 2048, 00:13:40.927 "data_size": 63488 00:13:40.927 } 00:13:40.927 ] 00:13:40.927 }' 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:40.927 21:20:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.508 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.508 21:20:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.508 21:20:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.508 [2024-11-26 21:20:59.473992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.509 [2024-11-26 21:20:59.474084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.509 [2024-11-26 21:20:59.474121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:41.509 [2024-11-26 21:20:59.474148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.509 [2024-11-26 21:20:59.474716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.509 [2024-11-26 21:20:59.474746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.509 [2024-11-26 21:20:59.474857] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:41.509 [2024-11-26 21:20:59.474882] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:41.509 [2024-11-26 21:20:59.474900] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:41.509 [2024-11-26 21:20:59.474929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.509 [2024-11-26 21:20:59.489342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:41.509 spare 00:13:41.509 21:20:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.509 21:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:41.509 [2024-11-26 21:20:59.491436] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.450 "name": "raid_bdev1", 00:13:42.450 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:42.450 "strip_size_kb": 0, 00:13:42.450 "state": "online", 00:13:42.450 
"raid_level": "raid1", 00:13:42.450 "superblock": true, 00:13:42.450 "num_base_bdevs": 4, 00:13:42.450 "num_base_bdevs_discovered": 3, 00:13:42.450 "num_base_bdevs_operational": 3, 00:13:42.450 "process": { 00:13:42.450 "type": "rebuild", 00:13:42.450 "target": "spare", 00:13:42.450 "progress": { 00:13:42.450 "blocks": 20480, 00:13:42.450 "percent": 32 00:13:42.450 } 00:13:42.450 }, 00:13:42.450 "base_bdevs_list": [ 00:13:42.450 { 00:13:42.450 "name": "spare", 00:13:42.450 "uuid": "5ffc688a-c9ab-55d1-afe0-07c438ddaa17", 00:13:42.450 "is_configured": true, 00:13:42.450 "data_offset": 2048, 00:13:42.450 "data_size": 63488 00:13:42.450 }, 00:13:42.450 { 00:13:42.450 "name": null, 00:13:42.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.450 "is_configured": false, 00:13:42.450 "data_offset": 2048, 00:13:42.450 "data_size": 63488 00:13:42.450 }, 00:13:42.450 { 00:13:42.450 "name": "BaseBdev3", 00:13:42.450 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:42.450 "is_configured": true, 00:13:42.450 "data_offset": 2048, 00:13:42.450 "data_size": 63488 00:13:42.450 }, 00:13:42.450 { 00:13:42.450 "name": "BaseBdev4", 00:13:42.450 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:42.450 "is_configured": true, 00:13:42.450 "data_offset": 2048, 00:13:42.450 "data_size": 63488 00:13:42.450 } 00:13:42.450 ] 00:13:42.450 }' 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.450 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.711 [2024-11-26 21:21:00.631086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.711 [2024-11-26 21:21:00.700600] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:42.711 [2024-11-26 21:21:00.700664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.711 [2024-11-26 21:21:00.700680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.711 [2024-11-26 21:21:00.700690] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.711 
21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.711 "name": "raid_bdev1", 00:13:42.711 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:42.711 "strip_size_kb": 0, 00:13:42.711 "state": "online", 00:13:42.711 "raid_level": "raid1", 00:13:42.711 "superblock": true, 00:13:42.711 "num_base_bdevs": 4, 00:13:42.711 "num_base_bdevs_discovered": 2, 00:13:42.711 "num_base_bdevs_operational": 2, 00:13:42.711 "base_bdevs_list": [ 00:13:42.711 { 00:13:42.711 "name": null, 00:13:42.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.711 "is_configured": false, 00:13:42.711 "data_offset": 0, 00:13:42.711 "data_size": 63488 00:13:42.711 }, 00:13:42.711 { 00:13:42.711 "name": null, 00:13:42.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.711 "is_configured": false, 00:13:42.711 "data_offset": 2048, 00:13:42.711 "data_size": 63488 00:13:42.711 }, 00:13:42.711 { 00:13:42.711 "name": "BaseBdev3", 00:13:42.711 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:42.711 "is_configured": true, 00:13:42.711 "data_offset": 2048, 00:13:42.711 "data_size": 63488 00:13:42.711 }, 00:13:42.711 { 00:13:42.711 "name": "BaseBdev4", 00:13:42.711 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:42.711 "is_configured": true, 00:13:42.711 "data_offset": 2048, 00:13:42.711 "data_size": 63488 00:13:42.711 } 00:13:42.711 ] 00:13:42.711 }' 00:13:42.711 21:21:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.711 21:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.281 "name": "raid_bdev1", 00:13:43.281 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:43.281 "strip_size_kb": 0, 00:13:43.281 "state": "online", 00:13:43.281 "raid_level": "raid1", 00:13:43.281 "superblock": true, 00:13:43.281 "num_base_bdevs": 4, 00:13:43.281 "num_base_bdevs_discovered": 2, 00:13:43.281 "num_base_bdevs_operational": 2, 00:13:43.281 "base_bdevs_list": [ 00:13:43.281 { 00:13:43.281 "name": null, 00:13:43.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.281 "is_configured": false, 00:13:43.281 "data_offset": 0, 00:13:43.281 "data_size": 63488 00:13:43.281 }, 00:13:43.281 
{ 00:13:43.281 "name": null, 00:13:43.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.281 "is_configured": false, 00:13:43.281 "data_offset": 2048, 00:13:43.281 "data_size": 63488 00:13:43.281 }, 00:13:43.281 { 00:13:43.281 "name": "BaseBdev3", 00:13:43.281 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:43.281 "is_configured": true, 00:13:43.281 "data_offset": 2048, 00:13:43.281 "data_size": 63488 00:13:43.281 }, 00:13:43.281 { 00:13:43.281 "name": "BaseBdev4", 00:13:43.281 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:43.281 "is_configured": true, 00:13:43.281 "data_offset": 2048, 00:13:43.281 "data_size": 63488 00:13:43.281 } 00:13:43.281 ] 00:13:43.281 }' 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.281 [2024-11-26 21:21:01.338561] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:43.281 [2024-11-26 21:21:01.338638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.281 [2024-11-26 21:21:01.338661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:43.281 [2024-11-26 21:21:01.338673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.281 [2024-11-26 21:21:01.339272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.281 [2024-11-26 21:21:01.339302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:43.281 [2024-11-26 21:21:01.339397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:43.281 [2024-11-26 21:21:01.339421] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:43.281 [2024-11-26 21:21:01.339431] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:43.281 [2024-11-26 21:21:01.339461] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:43.281 BaseBdev1 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.281 21:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.222 21:21:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.222 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.482 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.482 "name": "raid_bdev1", 00:13:44.482 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:44.482 "strip_size_kb": 0, 00:13:44.482 "state": "online", 00:13:44.482 "raid_level": "raid1", 00:13:44.482 "superblock": true, 00:13:44.482 "num_base_bdevs": 4, 00:13:44.482 "num_base_bdevs_discovered": 2, 00:13:44.482 "num_base_bdevs_operational": 2, 00:13:44.482 "base_bdevs_list": [ 00:13:44.482 { 00:13:44.482 "name": null, 00:13:44.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.482 "is_configured": false, 00:13:44.482 "data_offset": 0, 00:13:44.482 "data_size": 63488 00:13:44.482 }, 00:13:44.482 { 00:13:44.482 "name": null, 00:13:44.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.482 
"is_configured": false, 00:13:44.482 "data_offset": 2048, 00:13:44.482 "data_size": 63488 00:13:44.482 }, 00:13:44.482 { 00:13:44.482 "name": "BaseBdev3", 00:13:44.482 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:44.482 "is_configured": true, 00:13:44.482 "data_offset": 2048, 00:13:44.482 "data_size": 63488 00:13:44.482 }, 00:13:44.482 { 00:13:44.482 "name": "BaseBdev4", 00:13:44.482 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:44.482 "is_configured": true, 00:13:44.482 "data_offset": 2048, 00:13:44.482 "data_size": 63488 00:13:44.482 } 00:13:44.482 ] 00:13:44.482 }' 00:13:44.482 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.482 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.742 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.742 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.742 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.742 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.742 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.742 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.742 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.742 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.742 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.742 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.742 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:44.742 "name": "raid_bdev1", 00:13:44.742 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:44.742 "strip_size_kb": 0, 00:13:44.742 "state": "online", 00:13:44.742 "raid_level": "raid1", 00:13:44.742 "superblock": true, 00:13:44.742 "num_base_bdevs": 4, 00:13:44.742 "num_base_bdevs_discovered": 2, 00:13:44.742 "num_base_bdevs_operational": 2, 00:13:44.742 "base_bdevs_list": [ 00:13:44.742 { 00:13:44.742 "name": null, 00:13:44.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.742 "is_configured": false, 00:13:44.742 "data_offset": 0, 00:13:44.742 "data_size": 63488 00:13:44.743 }, 00:13:44.743 { 00:13:44.743 "name": null, 00:13:44.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.743 "is_configured": false, 00:13:44.743 "data_offset": 2048, 00:13:44.743 "data_size": 63488 00:13:44.743 }, 00:13:44.743 { 00:13:44.743 "name": "BaseBdev3", 00:13:44.743 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:44.743 "is_configured": true, 00:13:44.743 "data_offset": 2048, 00:13:44.743 "data_size": 63488 00:13:44.743 }, 00:13:44.743 { 00:13:44.743 "name": "BaseBdev4", 00:13:44.743 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:44.743 "is_configured": true, 00:13:44.743 "data_offset": 2048, 00:13:44.743 "data_size": 63488 00:13:44.743 } 00:13:44.743 ] 00:13:44.743 }' 00:13:44.743 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.003 [2024-11-26 21:21:02.971805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.003 [2024-11-26 21:21:02.972085] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:45.003 [2024-11-26 21:21:02.972111] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:45.003 request: 00:13:45.003 { 00:13:45.003 "base_bdev": "BaseBdev1", 00:13:45.003 "raid_bdev": "raid_bdev1", 00:13:45.003 "method": "bdev_raid_add_base_bdev", 00:13:45.003 "req_id": 1 00:13:45.003 } 00:13:45.003 Got JSON-RPC error response 00:13:45.003 response: 00:13:45.003 { 00:13:45.003 "code": -22, 00:13:45.003 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:45.003 } 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.003 21:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.942 21:21:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:45.942 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.942 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.942 "name": "raid_bdev1", 00:13:45.942 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:45.942 "strip_size_kb": 0, 00:13:45.942 "state": "online", 00:13:45.942 "raid_level": "raid1", 00:13:45.942 "superblock": true, 00:13:45.942 "num_base_bdevs": 4, 00:13:45.942 "num_base_bdevs_discovered": 2, 00:13:45.942 "num_base_bdevs_operational": 2, 00:13:45.942 "base_bdevs_list": [ 00:13:45.942 { 00:13:45.942 "name": null, 00:13:45.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.942 "is_configured": false, 00:13:45.942 "data_offset": 0, 00:13:45.942 "data_size": 63488 00:13:45.942 }, 00:13:45.942 { 00:13:45.942 "name": null, 00:13:45.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.942 "is_configured": false, 00:13:45.942 "data_offset": 2048, 00:13:45.942 "data_size": 63488 00:13:45.942 }, 00:13:45.942 { 00:13:45.942 "name": "BaseBdev3", 00:13:45.942 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:45.942 "is_configured": true, 00:13:45.942 "data_offset": 2048, 00:13:45.942 "data_size": 63488 00:13:45.942 }, 00:13:45.942 { 00:13:45.942 "name": "BaseBdev4", 00:13:45.942 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:45.942 "is_configured": true, 00:13:45.942 "data_offset": 2048, 00:13:45.942 "data_size": 63488 00:13:45.942 } 00:13:45.942 ] 00:13:45.942 }' 00:13:45.942 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.942 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.537 21:21:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.537 "name": "raid_bdev1", 00:13:46.537 "uuid": "48f14543-db9e-44b5-b3da-002b2b161349", 00:13:46.537 "strip_size_kb": 0, 00:13:46.537 "state": "online", 00:13:46.537 "raid_level": "raid1", 00:13:46.537 "superblock": true, 00:13:46.537 "num_base_bdevs": 4, 00:13:46.537 "num_base_bdevs_discovered": 2, 00:13:46.537 "num_base_bdevs_operational": 2, 00:13:46.537 "base_bdevs_list": [ 00:13:46.537 { 00:13:46.537 "name": null, 00:13:46.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.537 "is_configured": false, 00:13:46.537 "data_offset": 0, 00:13:46.537 "data_size": 63488 00:13:46.537 }, 00:13:46.537 { 00:13:46.537 "name": null, 00:13:46.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.537 "is_configured": false, 00:13:46.537 "data_offset": 2048, 00:13:46.537 "data_size": 63488 00:13:46.537 }, 00:13:46.537 { 00:13:46.537 "name": "BaseBdev3", 00:13:46.537 "uuid": "8eb9adaa-df48-5a3c-ad2d-1565185aac5c", 00:13:46.537 "is_configured": true, 00:13:46.537 "data_offset": 2048, 00:13:46.537 "data_size": 63488 00:13:46.537 }, 
00:13:46.537 { 00:13:46.537 "name": "BaseBdev4", 00:13:46.537 "uuid": "55fc931e-8ea8-5465-85cd-229a06d7152d", 00:13:46.537 "is_configured": true, 00:13:46.537 "data_offset": 2048, 00:13:46.537 "data_size": 63488 00:13:46.537 } 00:13:46.537 ] 00:13:46.537 }' 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77775 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77775 ']' 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77775 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77775 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:46.537 killing process with pid 77775 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77775' 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77775 00:13:46.537 Received shutdown signal, test time was about 60.000000 seconds 00:13:46.537 00:13:46.537 Latency(us) 00:13:46.537 
[2024-11-26T21:21:04.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.537 [2024-11-26T21:21:04.693Z] =================================================================================================================== 00:13:46.537 [2024-11-26T21:21:04.693Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:46.537 [2024-11-26 21:21:04.636682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:46.537 21:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77775 00:13:46.537 [2024-11-26 21:21:04.636842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.537 [2024-11-26 21:21:04.636937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.537 [2024-11-26 21:21:04.636954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:47.107 [2024-11-26 21:21:05.181522] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:48.489 00:13:48.489 real 0m25.472s 00:13:48.489 user 0m30.178s 00:13:48.489 sys 0m4.040s 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.489 ************************************ 00:13:48.489 END TEST raid_rebuild_test_sb 00:13:48.489 ************************************ 00:13:48.489 21:21:06 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:48.489 21:21:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:48.489 21:21:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.489 21:21:06 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:13:48.489 ************************************ 00:13:48.489 START TEST raid_rebuild_test_io 00:13:48.489 ************************************ 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78537 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78537 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78537 ']' 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.489 21:21:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.489 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:48.489 Zero copy mechanism will not be used. 00:13:48.489 [2024-11-26 21:21:06.579095] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:13:48.489 [2024-11-26 21:21:06.579236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78537 ] 00:13:48.749 [2024-11-26 21:21:06.751798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.749 [2024-11-26 21:21:06.883520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.008 [2024-11-26 21:21:07.134731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.008 [2024-11-26 21:21:07.134808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.577 BaseBdev1_malloc 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.577 [2024-11-26 21:21:07.482431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:49.577 [2024-11-26 21:21:07.482512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.577 [2024-11-26 21:21:07.482539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:49.577 [2024-11-26 21:21:07.482552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.577 [2024-11-26 21:21:07.484948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.577 [2024-11-26 21:21:07.485004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:49.577 BaseBdev1 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:49.577 BaseBdev2_malloc 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.577 [2024-11-26 21:21:07.543497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:49.577 [2024-11-26 21:21:07.543669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.577 [2024-11-26 21:21:07.543701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:49.577 [2024-11-26 21:21:07.543713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.577 [2024-11-26 21:21:07.546178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.577 [2024-11-26 21:21:07.546218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:49.577 BaseBdev2 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.577 BaseBdev3_malloc 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.577 21:21:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.578 [2024-11-26 21:21:07.617073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:49.578 [2024-11-26 21:21:07.617144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.578 [2024-11-26 21:21:07.617173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:49.578 [2024-11-26 21:21:07.617185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.578 [2024-11-26 21:21:07.619611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.578 [2024-11-26 21:21:07.619656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:49.578 BaseBdev3 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.578 BaseBdev4_malloc 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.578 [2024-11-26 21:21:07.678122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:49.578 [2024-11-26 21:21:07.678191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.578 [2024-11-26 21:21:07.678214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:49.578 [2024-11-26 21:21:07.678225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.578 [2024-11-26 21:21:07.680495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.578 [2024-11-26 21:21:07.680536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:49.578 BaseBdev4 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.578 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.837 spare_malloc 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.837 spare_delay 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.837 [2024-11-26 21:21:07.751054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:49.837 [2024-11-26 21:21:07.751105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.837 [2024-11-26 21:21:07.751123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:49.837 [2024-11-26 21:21:07.751134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.837 [2024-11-26 21:21:07.753444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.837 [2024-11-26 21:21:07.753578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:49.837 spare 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.837 [2024-11-26 21:21:07.763143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.837 [2024-11-26 21:21:07.765187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.837 [2024-11-26 21:21:07.765252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.837 [2024-11-26 21:21:07.765301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:49.837 [2024-11-26 21:21:07.765387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:49.837 [2024-11-26 21:21:07.765400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:49.837 [2024-11-26 21:21:07.765637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:49.837 [2024-11-26 21:21:07.765800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:49.837 [2024-11-26 21:21:07.765812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:49.837 [2024-11-26 21:21:07.765953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.837 21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.837 "name": "raid_bdev1", 00:13:49.837 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:49.837 "strip_size_kb": 0, 00:13:49.837 "state": "online", 00:13:49.837 "raid_level": "raid1", 00:13:49.837 "superblock": false, 00:13:49.837 "num_base_bdevs": 4, 00:13:49.837 "num_base_bdevs_discovered": 4, 00:13:49.837 "num_base_bdevs_operational": 4, 00:13:49.838 "base_bdevs_list": [ 00:13:49.838 { 00:13:49.838 "name": "BaseBdev1", 00:13:49.838 "uuid": "e8ca30ce-2a0c-5d67-9908-874c87245a9f", 00:13:49.838 "is_configured": true, 00:13:49.838 "data_offset": 0, 00:13:49.838 "data_size": 65536 00:13:49.838 }, 00:13:49.838 { 00:13:49.838 "name": "BaseBdev2", 00:13:49.838 "uuid": "1cb0e096-68bd-52d9-bc45-5b8ef65a094b", 00:13:49.838 "is_configured": true, 00:13:49.838 "data_offset": 0, 00:13:49.838 "data_size": 65536 00:13:49.838 }, 00:13:49.838 { 00:13:49.838 "name": "BaseBdev3", 00:13:49.838 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:49.838 "is_configured": true, 00:13:49.838 "data_offset": 0, 00:13:49.838 "data_size": 65536 00:13:49.838 }, 00:13:49.838 { 00:13:49.838 "name": "BaseBdev4", 00:13:49.838 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:49.838 "is_configured": true, 00:13:49.838 "data_offset": 0, 00:13:49.838 "data_size": 65536 00:13:49.838 } 00:13:49.838 ] 00:13:49.838 }' 00:13:49.838 
21:21:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.838 21:21:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.097 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:50.097 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.097 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:50.097 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.097 [2024-11-26 21:21:08.190741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:50.097 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.097 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:50.097 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:50.097 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.097 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.097 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:50.358 21:21:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.358 [2024-11-26 21:21:08.266242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.358 "name": "raid_bdev1", 00:13:50.358 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:50.358 "strip_size_kb": 0, 00:13:50.358 "state": "online", 00:13:50.358 "raid_level": "raid1", 00:13:50.358 "superblock": false, 00:13:50.358 "num_base_bdevs": 4, 00:13:50.358 "num_base_bdevs_discovered": 3, 00:13:50.358 "num_base_bdevs_operational": 3, 00:13:50.358 "base_bdevs_list": [ 00:13:50.358 { 00:13:50.358 "name": null, 00:13:50.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.358 "is_configured": false, 00:13:50.358 "data_offset": 0, 00:13:50.358 "data_size": 65536 00:13:50.358 }, 00:13:50.358 { 00:13:50.358 "name": "BaseBdev2", 00:13:50.358 "uuid": "1cb0e096-68bd-52d9-bc45-5b8ef65a094b", 00:13:50.358 "is_configured": true, 00:13:50.358 "data_offset": 0, 00:13:50.358 "data_size": 65536 00:13:50.358 }, 00:13:50.358 { 00:13:50.358 "name": "BaseBdev3", 00:13:50.358 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:50.358 "is_configured": true, 00:13:50.358 "data_offset": 0, 00:13:50.358 "data_size": 65536 00:13:50.358 }, 00:13:50.358 { 00:13:50.358 "name": "BaseBdev4", 00:13:50.358 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:50.358 "is_configured": true, 00:13:50.358 "data_offset": 0, 00:13:50.358 "data_size": 65536 00:13:50.358 } 00:13:50.358 ] 00:13:50.358 }' 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.358 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.358 [2024-11-26 21:21:08.355370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:50.358 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:50.358 Zero copy mechanism will not be used. 00:13:50.358 Running I/O for 60 seconds... 
00:13:50.618 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:50.618 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.618 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.618 [2024-11-26 21:21:08.669649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:50.618 21:21:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.618 21:21:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:50.618 [2024-11-26 21:21:08.730543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:50.618 [2024-11-26 21:21:08.732846] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:50.877 [2024-11-26 21:21:09.023351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:50.878 [2024-11-26 21:21:09.024633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:51.457 158.00 IOPS, 474.00 MiB/s [2024-11-26T21:21:09.613Z] [2024-11-26 21:21:09.380528] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:51.457 [2024-11-26 21:21:09.500468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.717 21:21:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.717 "name": "raid_bdev1", 00:13:51.717 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:51.717 "strip_size_kb": 0, 00:13:51.717 "state": "online", 00:13:51.717 "raid_level": "raid1", 00:13:51.717 "superblock": false, 00:13:51.717 "num_base_bdevs": 4, 00:13:51.717 "num_base_bdevs_discovered": 4, 00:13:51.717 "num_base_bdevs_operational": 4, 00:13:51.717 "process": { 00:13:51.717 "type": "rebuild", 00:13:51.717 "target": "spare", 00:13:51.717 "progress": { 00:13:51.717 "blocks": 12288, 00:13:51.717 "percent": 18 00:13:51.717 } 00:13:51.717 }, 00:13:51.717 "base_bdevs_list": [ 00:13:51.717 { 00:13:51.717 "name": "spare", 00:13:51.717 "uuid": "579069a1-e5bf-507c-99ff-da6160fe826e", 00:13:51.717 "is_configured": true, 00:13:51.717 "data_offset": 0, 00:13:51.717 "data_size": 65536 00:13:51.717 }, 00:13:51.717 { 00:13:51.717 "name": "BaseBdev2", 00:13:51.717 "uuid": "1cb0e096-68bd-52d9-bc45-5b8ef65a094b", 00:13:51.717 "is_configured": true, 00:13:51.717 "data_offset": 0, 00:13:51.717 "data_size": 65536 00:13:51.717 }, 00:13:51.717 { 00:13:51.717 "name": "BaseBdev3", 00:13:51.717 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:51.717 
"is_configured": true, 00:13:51.717 "data_offset": 0, 00:13:51.717 "data_size": 65536 00:13:51.717 }, 00:13:51.717 { 00:13:51.717 "name": "BaseBdev4", 00:13:51.717 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:51.717 "is_configured": true, 00:13:51.717 "data_offset": 0, 00:13:51.717 "data_size": 65536 00:13:51.717 } 00:13:51.717 ] 00:13:51.717 }' 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.717 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.978 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.978 21:21:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:51.978 21:21:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.978 21:21:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.978 [2024-11-26 21:21:09.883991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:51.978 [2024-11-26 21:21:09.977196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:51.978 [2024-11-26 21:21:09.997209] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:51.978 [2024-11-26 21:21:10.012271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.978 [2024-11-26 21:21:10.012390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:51.978 [2024-11-26 21:21:10.012410] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:51.978 [2024-11-26 21:21:10.043725] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.978 "name": "raid_bdev1", 00:13:51.978 
"uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:51.978 "strip_size_kb": 0, 00:13:51.978 "state": "online", 00:13:51.978 "raid_level": "raid1", 00:13:51.978 "superblock": false, 00:13:51.978 "num_base_bdevs": 4, 00:13:51.978 "num_base_bdevs_discovered": 3, 00:13:51.978 "num_base_bdevs_operational": 3, 00:13:51.978 "base_bdevs_list": [ 00:13:51.978 { 00:13:51.978 "name": null, 00:13:51.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.978 "is_configured": false, 00:13:51.978 "data_offset": 0, 00:13:51.978 "data_size": 65536 00:13:51.978 }, 00:13:51.978 { 00:13:51.978 "name": "BaseBdev2", 00:13:51.978 "uuid": "1cb0e096-68bd-52d9-bc45-5b8ef65a094b", 00:13:51.978 "is_configured": true, 00:13:51.978 "data_offset": 0, 00:13:51.978 "data_size": 65536 00:13:51.978 }, 00:13:51.978 { 00:13:51.978 "name": "BaseBdev3", 00:13:51.978 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:51.978 "is_configured": true, 00:13:51.978 "data_offset": 0, 00:13:51.978 "data_size": 65536 00:13:51.978 }, 00:13:51.978 { 00:13:51.978 "name": "BaseBdev4", 00:13:51.978 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:51.978 "is_configured": true, 00:13:51.978 "data_offset": 0, 00:13:51.978 "data_size": 65536 00:13:51.978 } 00:13:51.978 ] 00:13:51.978 }' 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.978 21:21:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.498 153.00 IOPS, 459.00 MiB/s [2024-11-26T21:21:10.654Z] 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.498 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.498 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.498 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.498 21:21:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.498 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.498 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.498 21:21:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.498 21:21:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.498 21:21:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.498 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.498 "name": "raid_bdev1", 00:13:52.498 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:52.498 "strip_size_kb": 0, 00:13:52.498 "state": "online", 00:13:52.498 "raid_level": "raid1", 00:13:52.498 "superblock": false, 00:13:52.498 "num_base_bdevs": 4, 00:13:52.498 "num_base_bdevs_discovered": 3, 00:13:52.498 "num_base_bdevs_operational": 3, 00:13:52.498 "base_bdevs_list": [ 00:13:52.498 { 00:13:52.498 "name": null, 00:13:52.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.498 "is_configured": false, 00:13:52.498 "data_offset": 0, 00:13:52.498 "data_size": 65536 00:13:52.498 }, 00:13:52.498 { 00:13:52.498 "name": "BaseBdev2", 00:13:52.498 "uuid": "1cb0e096-68bd-52d9-bc45-5b8ef65a094b", 00:13:52.498 "is_configured": true, 00:13:52.498 "data_offset": 0, 00:13:52.498 "data_size": 65536 00:13:52.498 }, 00:13:52.498 { 00:13:52.498 "name": "BaseBdev3", 00:13:52.498 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:52.498 "is_configured": true, 00:13:52.498 "data_offset": 0, 00:13:52.498 "data_size": 65536 00:13:52.498 }, 00:13:52.498 { 00:13:52.498 "name": "BaseBdev4", 00:13:52.498 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:52.498 "is_configured": true, 00:13:52.498 "data_offset": 0, 00:13:52.498 "data_size": 65536 
00:13:52.498 } 00:13:52.498 ] 00:13:52.498 }' 00:13:52.499 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.499 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.499 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.759 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.759 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:52.759 21:21:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.759 21:21:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.759 [2024-11-26 21:21:10.678184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.759 21:21:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.759 21:21:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:52.759 [2024-11-26 21:21:10.763708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:52.759 [2024-11-26 21:21:10.766183] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.759 [2024-11-26 21:21:10.872073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:52.759 [2024-11-26 21:21:10.872808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:53.019 [2024-11-26 21:21:11.027714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:53.849 169.00 IOPS, 507.00 MiB/s [2024-11-26T21:21:12.005Z] 21:21:11 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.849 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.849 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.849 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.850 "name": "raid_bdev1", 00:13:53.850 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:53.850 "strip_size_kb": 0, 00:13:53.850 "state": "online", 00:13:53.850 "raid_level": "raid1", 00:13:53.850 "superblock": false, 00:13:53.850 "num_base_bdevs": 4, 00:13:53.850 "num_base_bdevs_discovered": 4, 00:13:53.850 "num_base_bdevs_operational": 4, 00:13:53.850 "process": { 00:13:53.850 "type": "rebuild", 00:13:53.850 "target": "spare", 00:13:53.850 "progress": { 00:13:53.850 "blocks": 12288, 00:13:53.850 "percent": 18 00:13:53.850 } 00:13:53.850 }, 00:13:53.850 "base_bdevs_list": [ 00:13:53.850 { 00:13:53.850 "name": "spare", 00:13:53.850 "uuid": "579069a1-e5bf-507c-99ff-da6160fe826e", 00:13:53.850 "is_configured": true, 00:13:53.850 "data_offset": 0, 00:13:53.850 "data_size": 65536 00:13:53.850 }, 00:13:53.850 { 
00:13:53.850 "name": "BaseBdev2", 00:13:53.850 "uuid": "1cb0e096-68bd-52d9-bc45-5b8ef65a094b", 00:13:53.850 "is_configured": true, 00:13:53.850 "data_offset": 0, 00:13:53.850 "data_size": 65536 00:13:53.850 }, 00:13:53.850 { 00:13:53.850 "name": "BaseBdev3", 00:13:53.850 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:53.850 "is_configured": true, 00:13:53.850 "data_offset": 0, 00:13:53.850 "data_size": 65536 00:13:53.850 }, 00:13:53.850 { 00:13:53.850 "name": "BaseBdev4", 00:13:53.850 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:53.850 "is_configured": true, 00:13:53.850 "data_offset": 0, 00:13:53.850 "data_size": 65536 00:13:53.850 } 00:13:53.850 ] 00:13:53.850 }' 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.850 [2024-11-26 21:21:11.795348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:53.850 [2024-11-26 21:21:11.797796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev2 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.850 21:21:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.850 [2024-11-26 21:21:11.848918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.111 [2024-11-26 21:21:12.022936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:54.111 [2024-11-26 21:21:12.054878] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:54.111 [2024-11-26 21:21:12.055005] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.111 [2024-11-26 21:21:12.066273] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.111 "name": "raid_bdev1", 00:13:54.111 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:54.111 "strip_size_kb": 0, 00:13:54.111 "state": "online", 00:13:54.111 "raid_level": "raid1", 00:13:54.111 "superblock": false, 00:13:54.111 "num_base_bdevs": 4, 00:13:54.111 "num_base_bdevs_discovered": 3, 00:13:54.111 "num_base_bdevs_operational": 3, 00:13:54.111 "process": { 00:13:54.111 "type": "rebuild", 00:13:54.111 "target": "spare", 00:13:54.111 "progress": { 00:13:54.111 "blocks": 16384, 00:13:54.111 "percent": 25 00:13:54.111 } 00:13:54.111 }, 00:13:54.111 "base_bdevs_list": [ 00:13:54.111 { 00:13:54.111 "name": "spare", 00:13:54.111 "uuid": "579069a1-e5bf-507c-99ff-da6160fe826e", 00:13:54.111 "is_configured": true, 00:13:54.111 "data_offset": 0, 00:13:54.111 "data_size": 65536 00:13:54.111 }, 00:13:54.111 { 00:13:54.111 "name": null, 00:13:54.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.111 "is_configured": false, 00:13:54.111 "data_offset": 0, 00:13:54.111 "data_size": 65536 00:13:54.111 }, 00:13:54.111 { 00:13:54.111 "name": "BaseBdev3", 00:13:54.111 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:54.111 "is_configured": true, 00:13:54.111 "data_offset": 0, 00:13:54.111 "data_size": 65536 00:13:54.111 }, 00:13:54.111 { 00:13:54.111 "name": "BaseBdev4", 00:13:54.111 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:54.111 "is_configured": true, 00:13:54.111 "data_offset": 0, 00:13:54.111 "data_size": 65536 00:13:54.111 } 00:13:54.111 ] 00:13:54.111 }' 
00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=474 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.111 "name": "raid_bdev1", 00:13:54.111 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:54.111 
"strip_size_kb": 0, 00:13:54.111 "state": "online", 00:13:54.111 "raid_level": "raid1", 00:13:54.111 "superblock": false, 00:13:54.111 "num_base_bdevs": 4, 00:13:54.111 "num_base_bdevs_discovered": 3, 00:13:54.111 "num_base_bdevs_operational": 3, 00:13:54.111 "process": { 00:13:54.111 "type": "rebuild", 00:13:54.111 "target": "spare", 00:13:54.111 "progress": { 00:13:54.111 "blocks": 16384, 00:13:54.111 "percent": 25 00:13:54.111 } 00:13:54.111 }, 00:13:54.111 "base_bdevs_list": [ 00:13:54.111 { 00:13:54.111 "name": "spare", 00:13:54.111 "uuid": "579069a1-e5bf-507c-99ff-da6160fe826e", 00:13:54.111 "is_configured": true, 00:13:54.111 "data_offset": 0, 00:13:54.111 "data_size": 65536 00:13:54.111 }, 00:13:54.111 { 00:13:54.111 "name": null, 00:13:54.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.111 "is_configured": false, 00:13:54.111 "data_offset": 0, 00:13:54.111 "data_size": 65536 00:13:54.111 }, 00:13:54.111 { 00:13:54.111 "name": "BaseBdev3", 00:13:54.111 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:54.111 "is_configured": true, 00:13:54.111 "data_offset": 0, 00:13:54.111 "data_size": 65536 00:13:54.111 }, 00:13:54.111 { 00:13:54.111 "name": "BaseBdev4", 00:13:54.111 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:54.111 "is_configured": true, 00:13:54.111 "data_offset": 0, 00:13:54.111 "data_size": 65536 00:13:54.111 } 00:13:54.111 ] 00:13:54.111 }' 00:13:54.111 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.379 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.379 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.379 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.379 21:21:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.379 141.75 IOPS, 425.25 MiB/s 
[2024-11-26T21:21:12.535Z] [2024-11-26 21:21:12.409964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:54.640 [2024-11-26 21:21:12.656032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:54.640 [2024-11-26 21:21:12.657025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:54.900 [2024-11-26 21:21:12.982343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:55.161 [2024-11-26 21:21:13.227209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.421 21:21:13 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.421 124.40 IOPS, 373.20 MiB/s [2024-11-26T21:21:13.577Z] 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.421 "name": "raid_bdev1", 00:13:55.421 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:55.421 "strip_size_kb": 0, 00:13:55.421 "state": "online", 00:13:55.421 "raid_level": "raid1", 00:13:55.421 "superblock": false, 00:13:55.421 "num_base_bdevs": 4, 00:13:55.421 "num_base_bdevs_discovered": 3, 00:13:55.421 "num_base_bdevs_operational": 3, 00:13:55.421 "process": { 00:13:55.421 "type": "rebuild", 00:13:55.421 "target": "spare", 00:13:55.421 "progress": { 00:13:55.421 "blocks": 28672, 00:13:55.421 "percent": 43 00:13:55.421 } 00:13:55.421 }, 00:13:55.421 "base_bdevs_list": [ 00:13:55.421 { 00:13:55.421 "name": "spare", 00:13:55.421 "uuid": "579069a1-e5bf-507c-99ff-da6160fe826e", 00:13:55.421 "is_configured": true, 00:13:55.421 "data_offset": 0, 00:13:55.421 "data_size": 65536 00:13:55.421 }, 00:13:55.421 { 00:13:55.421 "name": null, 00:13:55.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.421 "is_configured": false, 00:13:55.421 "data_offset": 0, 00:13:55.421 "data_size": 65536 00:13:55.421 }, 00:13:55.421 { 00:13:55.421 "name": "BaseBdev3", 00:13:55.421 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:55.421 "is_configured": true, 00:13:55.421 "data_offset": 0, 00:13:55.421 "data_size": 65536 00:13:55.421 }, 00:13:55.421 { 00:13:55.421 "name": "BaseBdev4", 00:13:55.421 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:55.421 "is_configured": true, 00:13:55.421 "data_offset": 0, 00:13:55.421 "data_size": 65536 00:13:55.421 } 00:13:55.421 ] 00:13:55.421 }' 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.421 
21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.421 21:21:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.681 [2024-11-26 21:21:13.689552] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:55.682 [2024-11-26 21:21:13.690099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:55.942 [2024-11-26 21:21:14.055174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:56.512 109.83 IOPS, 329.50 MiB/s [2024-11-26T21:21:14.668Z] 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.512 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.512 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.512 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.512 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.512 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.512 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.512 21:21:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.512 21:21:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.512 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.512 21:21:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.512 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.512 "name": "raid_bdev1", 00:13:56.512 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:56.512 "strip_size_kb": 0, 00:13:56.512 "state": "online", 00:13:56.512 "raid_level": "raid1", 00:13:56.512 "superblock": false, 00:13:56.512 "num_base_bdevs": 4, 00:13:56.512 "num_base_bdevs_discovered": 3, 00:13:56.512 "num_base_bdevs_operational": 3, 00:13:56.512 "process": { 00:13:56.512 "type": "rebuild", 00:13:56.512 "target": "spare", 00:13:56.512 "progress": { 00:13:56.512 "blocks": 43008, 00:13:56.512 "percent": 65 00:13:56.512 } 00:13:56.512 }, 00:13:56.512 "base_bdevs_list": [ 00:13:56.512 { 00:13:56.512 "name": "spare", 00:13:56.512 "uuid": "579069a1-e5bf-507c-99ff-da6160fe826e", 00:13:56.512 "is_configured": true, 00:13:56.512 "data_offset": 0, 00:13:56.512 "data_size": 65536 00:13:56.512 }, 00:13:56.512 { 00:13:56.512 "name": null, 00:13:56.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.512 "is_configured": false, 00:13:56.512 "data_offset": 0, 00:13:56.512 "data_size": 65536 00:13:56.512 }, 00:13:56.512 { 00:13:56.512 "name": "BaseBdev3", 00:13:56.512 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:56.512 "is_configured": true, 00:13:56.512 "data_offset": 0, 00:13:56.512 "data_size": 65536 00:13:56.512 }, 00:13:56.512 { 00:13:56.512 "name": "BaseBdev4", 00:13:56.512 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:56.512 "is_configured": true, 00:13:56.512 "data_offset": 0, 00:13:56.513 "data_size": 65536 00:13:56.513 } 00:13:56.513 ] 00:13:56.513 }' 00:13:56.513 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.513 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.513 21:21:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.513 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.513 21:21:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:56.513 [2024-11-26 21:21:14.650679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:57.082 [2024-11-26 21:21:14.988752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:57.353 [2024-11-26 21:21:15.319857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:57.616 101.57 IOPS, 304.71 MiB/s [2024-11-26T21:21:15.772Z] 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.616 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.616 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.616 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.616 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.616 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.616 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.616 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.616 21:21:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.616 21:21:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.616 21:21:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.616 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.616 "name": "raid_bdev1", 00:13:57.616 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:57.616 "strip_size_kb": 0, 00:13:57.616 "state": "online", 00:13:57.616 "raid_level": "raid1", 00:13:57.616 "superblock": false, 00:13:57.616 "num_base_bdevs": 4, 00:13:57.616 "num_base_bdevs_discovered": 3, 00:13:57.616 "num_base_bdevs_operational": 3, 00:13:57.616 "process": { 00:13:57.616 "type": "rebuild", 00:13:57.616 "target": "spare", 00:13:57.616 "progress": { 00:13:57.616 "blocks": 61440, 00:13:57.616 "percent": 93 00:13:57.616 } 00:13:57.616 }, 00:13:57.616 "base_bdevs_list": [ 00:13:57.616 { 00:13:57.616 "name": "spare", 00:13:57.616 "uuid": "579069a1-e5bf-507c-99ff-da6160fe826e", 00:13:57.616 "is_configured": true, 00:13:57.616 "data_offset": 0, 00:13:57.616 "data_size": 65536 00:13:57.616 }, 00:13:57.617 { 00:13:57.617 "name": null, 00:13:57.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.617 "is_configured": false, 00:13:57.617 "data_offset": 0, 00:13:57.617 "data_size": 65536 00:13:57.617 }, 00:13:57.617 { 00:13:57.617 "name": "BaseBdev3", 00:13:57.617 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:57.617 "is_configured": true, 00:13:57.617 "data_offset": 0, 00:13:57.617 "data_size": 65536 00:13:57.617 }, 00:13:57.617 { 00:13:57.617 "name": "BaseBdev4", 00:13:57.617 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:57.617 "is_configured": true, 00:13:57.617 "data_offset": 0, 00:13:57.617 "data_size": 65536 00:13:57.617 } 00:13:57.617 ] 00:13:57.617 }' 00:13:57.617 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.617 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.617 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:13:57.617 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.617 21:21:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.876 [2024-11-26 21:21:15.776960] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:57.876 [2024-11-26 21:21:15.881840] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:57.876 [2024-11-26 21:21:15.885486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.707 93.38 IOPS, 280.12 MiB/s [2024-11-26T21:21:16.863Z] 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.707 "name": 
"raid_bdev1", 00:13:58.707 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:58.707 "strip_size_kb": 0, 00:13:58.707 "state": "online", 00:13:58.707 "raid_level": "raid1", 00:13:58.707 "superblock": false, 00:13:58.707 "num_base_bdevs": 4, 00:13:58.707 "num_base_bdevs_discovered": 3, 00:13:58.707 "num_base_bdevs_operational": 3, 00:13:58.707 "base_bdevs_list": [ 00:13:58.707 { 00:13:58.707 "name": "spare", 00:13:58.707 "uuid": "579069a1-e5bf-507c-99ff-da6160fe826e", 00:13:58.707 "is_configured": true, 00:13:58.707 "data_offset": 0, 00:13:58.707 "data_size": 65536 00:13:58.707 }, 00:13:58.707 { 00:13:58.707 "name": null, 00:13:58.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.707 "is_configured": false, 00:13:58.707 "data_offset": 0, 00:13:58.707 "data_size": 65536 00:13:58.707 }, 00:13:58.707 { 00:13:58.707 "name": "BaseBdev3", 00:13:58.707 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:58.707 "is_configured": true, 00:13:58.707 "data_offset": 0, 00:13:58.707 "data_size": 65536 00:13:58.707 }, 00:13:58.707 { 00:13:58.707 "name": "BaseBdev4", 00:13:58.707 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:58.707 "is_configured": true, 00:13:58.707 "data_offset": 0, 00:13:58.707 "data_size": 65536 00:13:58.707 } 00:13:58.707 ] 00:13:58.707 }' 00:13:58.707 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.968 21:21:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.968 "name": "raid_bdev1", 00:13:58.968 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:58.968 "strip_size_kb": 0, 00:13:58.968 "state": "online", 00:13:58.968 "raid_level": "raid1", 00:13:58.968 "superblock": false, 00:13:58.968 "num_base_bdevs": 4, 00:13:58.968 "num_base_bdevs_discovered": 3, 00:13:58.968 "num_base_bdevs_operational": 3, 00:13:58.968 "base_bdevs_list": [ 00:13:58.968 { 00:13:58.968 "name": "spare", 00:13:58.968 "uuid": "579069a1-e5bf-507c-99ff-da6160fe826e", 00:13:58.968 "is_configured": true, 00:13:58.968 "data_offset": 0, 00:13:58.968 "data_size": 65536 00:13:58.968 }, 00:13:58.968 { 00:13:58.968 "name": null, 00:13:58.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.968 "is_configured": false, 00:13:58.968 "data_offset": 0, 00:13:58.968 "data_size": 65536 00:13:58.968 }, 00:13:58.968 { 00:13:58.968 "name": "BaseBdev3", 00:13:58.968 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 
00:13:58.968 "is_configured": true, 00:13:58.968 "data_offset": 0, 00:13:58.968 "data_size": 65536 00:13:58.968 }, 00:13:58.968 { 00:13:58.968 "name": "BaseBdev4", 00:13:58.968 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:58.968 "is_configured": true, 00:13:58.968 "data_offset": 0, 00:13:58.968 "data_size": 65536 00:13:58.968 } 00:13:58.968 ] 00:13:58.968 }' 00:13:58.968 21:21:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.968 "name": "raid_bdev1", 00:13:58.968 "uuid": "521fc648-0c89-4eb0-9682-f28f0e1d6776", 00:13:58.968 "strip_size_kb": 0, 00:13:58.968 "state": "online", 00:13:58.968 "raid_level": "raid1", 00:13:58.968 "superblock": false, 00:13:58.968 "num_base_bdevs": 4, 00:13:58.968 "num_base_bdevs_discovered": 3, 00:13:58.968 "num_base_bdevs_operational": 3, 00:13:58.968 "base_bdevs_list": [ 00:13:58.968 { 00:13:58.968 "name": "spare", 00:13:58.968 "uuid": "579069a1-e5bf-507c-99ff-da6160fe826e", 00:13:58.968 "is_configured": true, 00:13:58.968 "data_offset": 0, 00:13:58.968 "data_size": 65536 00:13:58.968 }, 00:13:58.968 { 00:13:58.968 "name": null, 00:13:58.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.968 "is_configured": false, 00:13:58.968 "data_offset": 0, 00:13:58.968 "data_size": 65536 00:13:58.968 }, 00:13:58.968 { 00:13:58.968 "name": "BaseBdev3", 00:13:58.968 "uuid": "b4d0d2c2-a039-5d0c-b526-1951c4b0c62c", 00:13:58.968 "is_configured": true, 00:13:58.968 "data_offset": 0, 00:13:58.968 "data_size": 65536 00:13:58.968 }, 00:13:58.968 { 00:13:58.968 "name": "BaseBdev4", 00:13:58.968 "uuid": "ab69a126-0469-5cd0-9c14-80ffc7a809da", 00:13:58.968 "is_configured": true, 00:13:58.968 "data_offset": 0, 00:13:58.968 "data_size": 65536 00:13:58.968 } 00:13:58.968 ] 00:13:58.968 }' 00:13:58.968 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.968 21:21:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.488 87.00 IOPS, 261.00 MiB/s [2024-11-26T21:21:17.644Z] 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:59.488 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.488 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.488 [2024-11-26 21:21:17.501609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.488 [2024-11-26 21:21:17.501723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.488 00:13:59.488 Latency(us) 00:13:59.488 [2024-11-26T21:21:17.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.488 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:59.488 raid_bdev1 : 9.27 85.01 255.02 0.00 0.00 18238.16 300.49 121799.66 00:13:59.488 [2024-11-26T21:21:17.644Z] =================================================================================================================== 00:13:59.488 [2024-11-26T21:21:17.644Z] Total : 85.01 255.02 0.00 0.00 18238.16 300.49 121799.66 00:13:59.488 [2024-11-26 21:21:17.630336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.488 [2024-11-26 21:21:17.630450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.488 [2024-11-26 21:21:17.630565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.488 [2024-11-26 21:21:17.630612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:59.488 { 00:13:59.488 "results": [ 00:13:59.488 { 00:13:59.488 "job": "raid_bdev1", 00:13:59.488 "core_mask": "0x1", 00:13:59.488 "workload": "randrw", 00:13:59.488 
"percentage": 50, 00:13:59.488 "status": "finished", 00:13:59.488 "queue_depth": 2, 00:13:59.488 "io_size": 3145728, 00:13:59.488 "runtime": 9.269883, 00:13:59.488 "iops": 85.0064666404096, 00:13:59.488 "mibps": 255.0193999212288, 00:13:59.488 "io_failed": 0, 00:13:59.488 "io_timeout": 0, 00:13:59.488 "avg_latency_us": 18238.158863298828, 00:13:59.488 "min_latency_us": 300.49257641921395, 00:13:59.488 "max_latency_us": 121799.6576419214 00:13:59.488 } 00:13:59.488 ], 00:13:59.488 "core_count": 1 00:13:59.488 } 00:13:59.488 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.488 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:59.488 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.488 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.488 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 
00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:59.748 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:59.748 /dev/nbd0 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.008 1+0 records in 00:14:00.008 1+0 records out 00:14:00.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263698 s, 15.5 MB/s 00:14:00.008 21:21:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.008 21:21:17 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.008 21:21:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:00.008 /dev/nbd1 00:14:00.008 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:00.008 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:00.008 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:00.008 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:00.008 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.008 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.008 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:00.008 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:00.008 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.008 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.008 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.268 1+0 records in 00:14:00.268 1+0 records out 00:14:00.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482457 s, 8.5 MB/s 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.268 
21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.268 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.528 21:21:18 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.528 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:00.789 /dev/nbd1 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.789 1+0 records in 00:14:00.789 1+0 records out 00:14:00.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365596 s, 11.2 MB/s 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.789 21:21:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 
00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.049 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78537 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78537 ']' 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78537 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 78537 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78537' 00:14:01.308 killing process with pid 78537 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78537 00:14:01.308 Received shutdown signal, test time was about 11.037244 seconds 00:14:01.308 00:14:01.308 Latency(us) 00:14:01.308 [2024-11-26T21:21:19.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.308 [2024-11-26T21:21:19.464Z] =================================================================================================================== 00:14:01.308 [2024-11-26T21:21:19.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:01.308 [2024-11-26 21:21:19.373881] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:01.308 21:21:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78537 00:14:01.877 [2024-11-26 21:21:19.819930] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:03.293 00:14:03.293 real 0m14.592s 00:14:03.293 user 0m17.905s 00:14:03.293 sys 0m1.969s 00:14:03.293 ************************************ 00:14:03.293 END TEST raid_rebuild_test_io 00:14:03.293 ************************************ 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.293 21:21:21 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:03.293 
21:21:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:03.293 21:21:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.293 21:21:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.293 ************************************ 00:14:03.293 START TEST raid_rebuild_test_sb_io 00:14:03.293 ************************************ 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.293 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78965 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78965 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78965 ']' 00:14:03.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.294 21:21:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.294 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:03.294 Zero copy mechanism will not be used. 00:14:03.294 [2024-11-26 21:21:21.246943] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:14:03.294 [2024-11-26 21:21:21.247083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78965 ] 00:14:03.294 [2024-11-26 21:21:21.417624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.560 [2024-11-26 21:21:21.550867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.820 [2024-11-26 21:21:21.783705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.820 [2024-11-26 21:21:21.783845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.079 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.079 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:04.079 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.079 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:04.079 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.079 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.079 BaseBdev1_malloc 00:14:04.079 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.079 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:04.079 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.079 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.079 [2024-11-26 21:21:22.116392] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:04.079 [2024-11-26 21:21:22.116470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.080 [2024-11-26 21:21:22.116493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:04.080 [2024-11-26 21:21:22.116506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.080 [2024-11-26 21:21:22.118825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.080 [2024-11-26 21:21:22.118864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:04.080 BaseBdev1 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.080 BaseBdev2_malloc 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.080 [2024-11-26 21:21:22.176419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:04.080 [2024-11-26 21:21:22.176483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:04.080 [2024-11-26 21:21:22.176505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:04.080 [2024-11-26 21:21:22.176517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.080 [2024-11-26 21:21:22.178799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.080 [2024-11-26 21:21:22.178836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:04.080 BaseBdev2 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.080 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.339 BaseBdev3_malloc 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.339 [2024-11-26 21:21:22.250661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:04.339 [2024-11-26 21:21:22.250724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.339 [2024-11-26 21:21:22.250747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:04.339 
[2024-11-26 21:21:22.250758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.339 [2024-11-26 21:21:22.253177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.339 [2024-11-26 21:21:22.253215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:04.339 BaseBdev3 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.339 BaseBdev4_malloc 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.339 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.339 [2024-11-26 21:21:22.311686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:04.339 [2024-11-26 21:21:22.311749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.339 [2024-11-26 21:21:22.311769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:04.339 [2024-11-26 21:21:22.311779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.340 [2024-11-26 21:21:22.314120] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.340 [2024-11-26 21:21:22.314157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:04.340 BaseBdev4 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.340 spare_malloc 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.340 spare_delay 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.340 [2024-11-26 21:21:22.385124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:04.340 [2024-11-26 21:21:22.385178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.340 [2024-11-26 21:21:22.385195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:14:04.340 [2024-11-26 21:21:22.385206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.340 [2024-11-26 21:21:22.387442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.340 [2024-11-26 21:21:22.387480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:04.340 spare 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.340 [2024-11-26 21:21:22.397169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.340 [2024-11-26 21:21:22.399161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.340 [2024-11-26 21:21:22.399215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.340 [2024-11-26 21:21:22.399263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:04.340 [2024-11-26 21:21:22.399429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:04.340 [2024-11-26 21:21:22.399450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:04.340 [2024-11-26 21:21:22.399681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:04.340 [2024-11-26 21:21:22.399849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:04.340 [2024-11-26 21:21:22.399860] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:04.340 [2024-11-26 21:21:22.400019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.340 "name": "raid_bdev1", 00:14:04.340 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:04.340 "strip_size_kb": 0, 00:14:04.340 "state": "online", 00:14:04.340 "raid_level": "raid1", 00:14:04.340 "superblock": true, 00:14:04.340 "num_base_bdevs": 4, 00:14:04.340 "num_base_bdevs_discovered": 4, 00:14:04.340 "num_base_bdevs_operational": 4, 00:14:04.340 "base_bdevs_list": [ 00:14:04.340 { 00:14:04.340 "name": "BaseBdev1", 00:14:04.340 "uuid": "cf9e7b87-f56c-5d6c-9c54-1e7e5b66e8cc", 00:14:04.340 "is_configured": true, 00:14:04.340 "data_offset": 2048, 00:14:04.340 "data_size": 63488 00:14:04.340 }, 00:14:04.340 { 00:14:04.340 "name": "BaseBdev2", 00:14:04.340 "uuid": "c6ab2cc3-0eea-5a4a-a1bb-75de44e98fbc", 00:14:04.340 "is_configured": true, 00:14:04.340 "data_offset": 2048, 00:14:04.340 "data_size": 63488 00:14:04.340 }, 00:14:04.340 { 00:14:04.340 "name": "BaseBdev3", 00:14:04.340 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:04.340 "is_configured": true, 00:14:04.340 "data_offset": 2048, 00:14:04.340 "data_size": 63488 00:14:04.340 }, 00:14:04.340 { 00:14:04.340 "name": "BaseBdev4", 00:14:04.340 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:04.340 "is_configured": true, 00:14:04.340 "data_offset": 2048, 00:14:04.340 "data_size": 63488 00:14:04.340 } 00:14:04.340 ] 00:14:04.340 }' 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.340 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:04.909 [2024-11-26 21:21:22.864805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.909 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.910 [2024-11-26 21:21:22.952256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.910 21:21:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.910 21:21:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.910 21:21:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.910 "name": "raid_bdev1", 00:14:04.910 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:04.910 "strip_size_kb": 0, 00:14:04.910 "state": "online", 00:14:04.910 "raid_level": "raid1", 00:14:04.910 
"superblock": true, 00:14:04.910 "num_base_bdevs": 4, 00:14:04.910 "num_base_bdevs_discovered": 3, 00:14:04.910 "num_base_bdevs_operational": 3, 00:14:04.910 "base_bdevs_list": [ 00:14:04.910 { 00:14:04.910 "name": null, 00:14:04.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.910 "is_configured": false, 00:14:04.910 "data_offset": 0, 00:14:04.910 "data_size": 63488 00:14:04.910 }, 00:14:04.910 { 00:14:04.910 "name": "BaseBdev2", 00:14:04.910 "uuid": "c6ab2cc3-0eea-5a4a-a1bb-75de44e98fbc", 00:14:04.910 "is_configured": true, 00:14:04.910 "data_offset": 2048, 00:14:04.910 "data_size": 63488 00:14:04.910 }, 00:14:04.910 { 00:14:04.910 "name": "BaseBdev3", 00:14:04.910 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:04.910 "is_configured": true, 00:14:04.910 "data_offset": 2048, 00:14:04.910 "data_size": 63488 00:14:04.910 }, 00:14:04.910 { 00:14:04.910 "name": "BaseBdev4", 00:14:04.910 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:04.910 "is_configured": true, 00:14:04.910 "data_offset": 2048, 00:14:04.910 "data_size": 63488 00:14:04.910 } 00:14:04.910 ] 00:14:04.910 }' 00:14:04.910 21:21:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.910 21:21:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.910 [2024-11-26 21:21:23.045682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:04.910 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:04.910 Zero copy mechanism will not be used. 00:14:04.910 Running I/O for 60 seconds... 
00:14:05.480 21:21:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:05.480 21:21:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.480 21:21:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.480 [2024-11-26 21:21:23.344185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.480 21:21:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.480 21:21:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:05.480 [2024-11-26 21:21:23.420223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:05.480 [2024-11-26 21:21:23.422578] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.480 [2024-11-26 21:21:23.532068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:05.480 [2024-11-26 21:21:23.533145] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:05.740 [2024-11-26 21:21:23.655133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:05.740 [2024-11-26 21:21:23.656297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:05.999 [2024-11-26 21:21:23.990905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:06.259 138.00 IOPS, 414.00 MiB/s [2024-11-26T21:21:24.415Z] [2024-11-26 21:21:24.216875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:06.259 [2024-11-26 21:21:24.218107] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:06.259 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.259 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.259 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.259 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.259 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.259 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.259 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.259 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.259 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.518 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.518 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.518 "name": "raid_bdev1", 00:14:06.518 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:06.518 "strip_size_kb": 0, 00:14:06.518 "state": "online", 00:14:06.518 "raid_level": "raid1", 00:14:06.518 "superblock": true, 00:14:06.518 "num_base_bdevs": 4, 00:14:06.518 "num_base_bdevs_discovered": 4, 00:14:06.518 "num_base_bdevs_operational": 4, 00:14:06.518 "process": { 00:14:06.518 "type": "rebuild", 00:14:06.518 "target": "spare", 00:14:06.518 "progress": { 00:14:06.518 "blocks": 10240, 00:14:06.518 "percent": 16 00:14:06.518 } 00:14:06.518 }, 00:14:06.518 "base_bdevs_list": [ 00:14:06.518 { 00:14:06.518 "name": "spare", 
00:14:06.518 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:06.518 "is_configured": true, 00:14:06.518 "data_offset": 2048, 00:14:06.518 "data_size": 63488 00:14:06.518 }, 00:14:06.518 { 00:14:06.518 "name": "BaseBdev2", 00:14:06.518 "uuid": "c6ab2cc3-0eea-5a4a-a1bb-75de44e98fbc", 00:14:06.518 "is_configured": true, 00:14:06.518 "data_offset": 2048, 00:14:06.518 "data_size": 63488 00:14:06.518 }, 00:14:06.518 { 00:14:06.518 "name": "BaseBdev3", 00:14:06.518 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:06.518 "is_configured": true, 00:14:06.518 "data_offset": 2048, 00:14:06.518 "data_size": 63488 00:14:06.518 }, 00:14:06.518 { 00:14:06.518 "name": "BaseBdev4", 00:14:06.518 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:06.518 "is_configured": true, 00:14:06.518 "data_offset": 2048, 00:14:06.518 "data_size": 63488 00:14:06.518 } 00:14:06.518 ] 00:14:06.518 }' 00:14:06.518 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.518 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.518 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.518 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.518 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:06.518 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.518 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.518 [2024-11-26 21:21:24.552734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.518 [2024-11-26 21:21:24.564572] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:06.518 [2024-11-26 
21:21:24.566779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:06.778 [2024-11-26 21:21:24.681537] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:06.778 [2024-11-26 21:21:24.697475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.778 [2024-11-26 21:21:24.697680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.778 [2024-11-26 21:21:24.697715] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.778 [2024-11-26 21:21:24.734801] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.778 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.779 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.779 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.779 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.779 "name": "raid_bdev1", 00:14:06.779 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:06.779 "strip_size_kb": 0, 00:14:06.779 "state": "online", 00:14:06.779 "raid_level": "raid1", 00:14:06.779 "superblock": true, 00:14:06.779 "num_base_bdevs": 4, 00:14:06.779 "num_base_bdevs_discovered": 3, 00:14:06.779 "num_base_bdevs_operational": 3, 00:14:06.779 "base_bdevs_list": [ 00:14:06.779 { 00:14:06.779 "name": null, 00:14:06.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.779 "is_configured": false, 00:14:06.779 "data_offset": 0, 00:14:06.779 "data_size": 63488 00:14:06.779 }, 00:14:06.779 { 00:14:06.779 "name": "BaseBdev2", 00:14:06.779 "uuid": "c6ab2cc3-0eea-5a4a-a1bb-75de44e98fbc", 00:14:06.779 "is_configured": true, 00:14:06.779 "data_offset": 2048, 00:14:06.779 "data_size": 63488 00:14:06.779 }, 00:14:06.779 { 00:14:06.779 "name": "BaseBdev3", 00:14:06.779 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:06.779 "is_configured": true, 00:14:06.779 "data_offset": 2048, 00:14:06.779 "data_size": 63488 00:14:06.779 }, 00:14:06.779 { 00:14:06.779 "name": "BaseBdev4", 00:14:06.779 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:06.779 "is_configured": true, 00:14:06.779 "data_offset": 2048, 00:14:06.779 "data_size": 63488 00:14:06.779 } 
00:14:06.779 ] 00:14:06.779 }' 00:14:06.779 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.779 21:21:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.300 123.00 IOPS, 369.00 MiB/s [2024-11-26T21:21:25.456Z] 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.300 "name": "raid_bdev1", 00:14:07.300 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:07.300 "strip_size_kb": 0, 00:14:07.300 "state": "online", 00:14:07.300 "raid_level": "raid1", 00:14:07.300 "superblock": true, 00:14:07.300 "num_base_bdevs": 4, 00:14:07.300 "num_base_bdevs_discovered": 3, 00:14:07.300 "num_base_bdevs_operational": 3, 00:14:07.300 "base_bdevs_list": [ 00:14:07.300 { 00:14:07.300 "name": null, 00:14:07.300 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:07.300 "is_configured": false, 00:14:07.300 "data_offset": 0, 00:14:07.300 "data_size": 63488 00:14:07.300 }, 00:14:07.300 { 00:14:07.300 "name": "BaseBdev2", 00:14:07.300 "uuid": "c6ab2cc3-0eea-5a4a-a1bb-75de44e98fbc", 00:14:07.300 "is_configured": true, 00:14:07.300 "data_offset": 2048, 00:14:07.300 "data_size": 63488 00:14:07.300 }, 00:14:07.300 { 00:14:07.300 "name": "BaseBdev3", 00:14:07.300 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:07.300 "is_configured": true, 00:14:07.300 "data_offset": 2048, 00:14:07.300 "data_size": 63488 00:14:07.300 }, 00:14:07.300 { 00:14:07.300 "name": "BaseBdev4", 00:14:07.300 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:07.300 "is_configured": true, 00:14:07.300 "data_offset": 2048, 00:14:07.300 "data_size": 63488 00:14:07.300 } 00:14:07.300 ] 00:14:07.300 }' 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.300 [2024-11-26 21:21:25.364842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.300 21:21:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:14:07.560 [2024-11-26 21:21:25.461660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:07.560 [2024-11-26 21:21:25.463932] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.560 [2024-11-26 21:21:25.582542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:07.560 [2024-11-26 21:21:25.583223] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:07.560 [2024-11-26 21:21:25.710632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:07.560 [2024-11-26 21:21:25.711285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:08.129 134.00 IOPS, 402.00 MiB/s [2024-11-26T21:21:26.286Z] [2024-11-26 21:21:26.059399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:08.390 [2024-11-26 21:21:26.287753] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:08.390 [2024-11-26 21:21:26.289176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.390 "name": "raid_bdev1", 00:14:08.390 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:08.390 "strip_size_kb": 0, 00:14:08.390 "state": "online", 00:14:08.390 "raid_level": "raid1", 00:14:08.390 "superblock": true, 00:14:08.390 "num_base_bdevs": 4, 00:14:08.390 "num_base_bdevs_discovered": 4, 00:14:08.390 "num_base_bdevs_operational": 4, 00:14:08.390 "process": { 00:14:08.390 "type": "rebuild", 00:14:08.390 "target": "spare", 00:14:08.390 "progress": { 00:14:08.390 "blocks": 10240, 00:14:08.390 "percent": 16 00:14:08.390 } 00:14:08.390 }, 00:14:08.390 "base_bdevs_list": [ 00:14:08.390 { 00:14:08.390 "name": "spare", 00:14:08.390 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:08.390 "is_configured": true, 00:14:08.390 "data_offset": 2048, 00:14:08.390 "data_size": 63488 00:14:08.390 }, 00:14:08.390 { 00:14:08.390 "name": "BaseBdev2", 00:14:08.390 "uuid": "c6ab2cc3-0eea-5a4a-a1bb-75de44e98fbc", 00:14:08.390 "is_configured": true, 00:14:08.390 "data_offset": 2048, 00:14:08.390 "data_size": 63488 00:14:08.390 }, 00:14:08.390 { 00:14:08.390 "name": "BaseBdev3", 00:14:08.390 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:08.390 "is_configured": true, 00:14:08.390 "data_offset": 2048, 00:14:08.390 "data_size": 63488 00:14:08.390 }, 00:14:08.390 { 00:14:08.390 "name": 
"BaseBdev4", 00:14:08.390 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:08.390 "is_configured": true, 00:14:08.390 "data_offset": 2048, 00:14:08.390 "data_size": 63488 00:14:08.390 } 00:14:08.390 ] 00:14:08.390 }' 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.390 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.650 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.650 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:08.650 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:08.650 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:08.650 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:08.650 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:08.650 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:08.650 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:08.650 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.650 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.650 [2024-11-26 21:21:26.598171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.650 [2024-11-26 21:21:26.619598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:08.910 [2024-11-26 21:21:26.824377] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:08.910 [2024-11-26 21:21:26.824512] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.910 "name": "raid_bdev1", 00:14:08.910 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:08.910 "strip_size_kb": 0, 00:14:08.910 "state": "online", 00:14:08.910 "raid_level": 
"raid1", 00:14:08.910 "superblock": true, 00:14:08.910 "num_base_bdevs": 4, 00:14:08.910 "num_base_bdevs_discovered": 3, 00:14:08.910 "num_base_bdevs_operational": 3, 00:14:08.910 "process": { 00:14:08.910 "type": "rebuild", 00:14:08.910 "target": "spare", 00:14:08.910 "progress": { 00:14:08.910 "blocks": 14336, 00:14:08.910 "percent": 22 00:14:08.910 } 00:14:08.910 }, 00:14:08.910 "base_bdevs_list": [ 00:14:08.910 { 00:14:08.910 "name": "spare", 00:14:08.910 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:08.910 "is_configured": true, 00:14:08.910 "data_offset": 2048, 00:14:08.910 "data_size": 63488 00:14:08.910 }, 00:14:08.910 { 00:14:08.910 "name": null, 00:14:08.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.910 "is_configured": false, 00:14:08.910 "data_offset": 0, 00:14:08.910 "data_size": 63488 00:14:08.910 }, 00:14:08.910 { 00:14:08.910 "name": "BaseBdev3", 00:14:08.910 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:08.910 "is_configured": true, 00:14:08.910 "data_offset": 2048, 00:14:08.910 "data_size": 63488 00:14:08.910 }, 00:14:08.910 { 00:14:08.910 "name": "BaseBdev4", 00:14:08.910 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:08.910 "is_configured": true, 00:14:08.910 "data_offset": 2048, 00:14:08.910 "data_size": 63488 00:14:08.910 } 00:14:08.910 ] 00:14:08.910 }' 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.910 [2024-11-26 21:21:26.945443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:08.910 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.911 21:21:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:14:08.911 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.911 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.911 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.911 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.911 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.911 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.911 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.911 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.911 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.911 21:21:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.911 21:21:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.911 21:21:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.911 "name": "raid_bdev1", 00:14:08.911 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:08.911 "strip_size_kb": 0, 00:14:08.911 "state": "online", 00:14:08.911 "raid_level": "raid1", 00:14:08.911 "superblock": true, 00:14:08.911 "num_base_bdevs": 4, 00:14:08.911 "num_base_bdevs_discovered": 3, 00:14:08.911 "num_base_bdevs_operational": 3, 00:14:08.911 "process": { 00:14:08.911 "type": "rebuild", 00:14:08.911 "target": "spare", 00:14:08.911 "progress": { 00:14:08.911 "blocks": 16384, 00:14:08.911 "percent": 25 00:14:08.911 } 00:14:08.911 }, 
00:14:08.911 "base_bdevs_list": [ 00:14:08.911 { 00:14:08.911 "name": "spare", 00:14:08.911 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:08.911 "is_configured": true, 00:14:08.911 "data_offset": 2048, 00:14:08.911 "data_size": 63488 00:14:08.911 }, 00:14:08.911 { 00:14:08.911 "name": null, 00:14:08.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.911 "is_configured": false, 00:14:08.911 "data_offset": 0, 00:14:08.911 "data_size": 63488 00:14:08.911 }, 00:14:08.911 { 00:14:08.911 "name": "BaseBdev3", 00:14:08.911 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:08.911 "is_configured": true, 00:14:08.911 "data_offset": 2048, 00:14:08.911 "data_size": 63488 00:14:08.911 }, 00:14:08.911 { 00:14:08.911 "name": "BaseBdev4", 00:14:08.911 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:08.911 "is_configured": true, 00:14:08.911 "data_offset": 2048, 00:14:08.911 "data_size": 63488 00:14:08.911 } 00:14:08.911 ] 00:14:08.911 }' 00:14:08.911 21:21:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.171 109.75 IOPS, 329.25 MiB/s [2024-11-26T21:21:27.327Z] 21:21:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.171 21:21:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.171 21:21:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.171 21:21:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.430 [2024-11-26 21:21:27.393018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:09.690 [2024-11-26 21:21:27.759193] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:10.210 98.00 IOPS, 294.00 MiB/s [2024-11-26T21:21:28.366Z] 21:21:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.210 "name": "raid_bdev1", 00:14:10.210 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:10.210 "strip_size_kb": 0, 00:14:10.210 "state": "online", 00:14:10.210 "raid_level": "raid1", 00:14:10.210 "superblock": true, 00:14:10.210 "num_base_bdevs": 4, 00:14:10.210 "num_base_bdevs_discovered": 3, 00:14:10.210 "num_base_bdevs_operational": 3, 00:14:10.210 "process": { 00:14:10.210 "type": "rebuild", 00:14:10.210 "target": "spare", 00:14:10.210 "progress": { 00:14:10.210 "blocks": 32768, 00:14:10.210 "percent": 51 00:14:10.210 } 00:14:10.210 }, 00:14:10.210 "base_bdevs_list": [ 00:14:10.210 { 00:14:10.210 "name": "spare", 00:14:10.210 "uuid": 
"1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:10.210 "is_configured": true, 00:14:10.210 "data_offset": 2048, 00:14:10.210 "data_size": 63488 00:14:10.210 }, 00:14:10.210 { 00:14:10.210 "name": null, 00:14:10.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.210 "is_configured": false, 00:14:10.210 "data_offset": 0, 00:14:10.210 "data_size": 63488 00:14:10.210 }, 00:14:10.210 { 00:14:10.210 "name": "BaseBdev3", 00:14:10.210 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:10.210 "is_configured": true, 00:14:10.210 "data_offset": 2048, 00:14:10.210 "data_size": 63488 00:14:10.210 }, 00:14:10.210 { 00:14:10.210 "name": "BaseBdev4", 00:14:10.210 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:10.210 "is_configured": true, 00:14:10.210 "data_offset": 2048, 00:14:10.210 "data_size": 63488 00:14:10.210 } 00:14:10.210 ] 00:14:10.210 }' 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.210 21:21:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:10.780 [2024-11-26 21:21:28.918897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:11.299 87.00 IOPS, 261.00 MiB/s [2024-11-26T21:21:29.455Z] [2024-11-26 21:21:29.273986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.299 "name": "raid_bdev1", 00:14:11.299 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:11.299 "strip_size_kb": 0, 00:14:11.299 "state": "online", 00:14:11.299 "raid_level": "raid1", 00:14:11.299 "superblock": true, 00:14:11.299 "num_base_bdevs": 4, 00:14:11.299 "num_base_bdevs_discovered": 3, 00:14:11.299 "num_base_bdevs_operational": 3, 00:14:11.299 "process": { 00:14:11.299 "type": "rebuild", 00:14:11.299 "target": "spare", 00:14:11.299 "progress": { 00:14:11.299 "blocks": 51200, 00:14:11.299 "percent": 80 00:14:11.299 } 00:14:11.299 }, 00:14:11.299 "base_bdevs_list": [ 00:14:11.299 { 00:14:11.299 "name": "spare", 00:14:11.299 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:11.299 "is_configured": true, 00:14:11.299 "data_offset": 2048, 00:14:11.299 "data_size": 63488 00:14:11.299 }, 00:14:11.299 { 
00:14:11.299 "name": null, 00:14:11.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.299 "is_configured": false, 00:14:11.299 "data_offset": 0, 00:14:11.299 "data_size": 63488 00:14:11.299 }, 00:14:11.299 { 00:14:11.299 "name": "BaseBdev3", 00:14:11.299 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:11.299 "is_configured": true, 00:14:11.299 "data_offset": 2048, 00:14:11.299 "data_size": 63488 00:14:11.299 }, 00:14:11.299 { 00:14:11.299 "name": "BaseBdev4", 00:14:11.299 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:11.299 "is_configured": true, 00:14:11.299 "data_offset": 2048, 00:14:11.299 "data_size": 63488 00:14:11.299 } 00:14:11.299 ] 00:14:11.299 }' 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.299 21:21:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:11.559 [2024-11-26 21:21:29.485531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:12.129 [2024-11-26 21:21:30.034208] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:12.129 80.14 IOPS, 240.43 MiB/s [2024-11-26T21:21:30.285Z] [2024-11-26 21:21:30.134026] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:12.129 [2024-11-26 21:21:30.143418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.389 21:21:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.389 "name": "raid_bdev1", 00:14:12.389 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:12.389 "strip_size_kb": 0, 00:14:12.389 "state": "online", 00:14:12.389 "raid_level": "raid1", 00:14:12.389 "superblock": true, 00:14:12.389 "num_base_bdevs": 4, 00:14:12.389 "num_base_bdevs_discovered": 3, 00:14:12.389 "num_base_bdevs_operational": 3, 00:14:12.389 "base_bdevs_list": [ 00:14:12.389 { 00:14:12.389 "name": "spare", 00:14:12.389 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:12.389 "is_configured": true, 00:14:12.389 "data_offset": 2048, 00:14:12.389 "data_size": 63488 00:14:12.389 }, 00:14:12.389 { 00:14:12.389 "name": null, 00:14:12.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.389 "is_configured": false, 00:14:12.389 
"data_offset": 0, 00:14:12.389 "data_size": 63488 00:14:12.389 }, 00:14:12.389 { 00:14:12.389 "name": "BaseBdev3", 00:14:12.389 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:12.389 "is_configured": true, 00:14:12.389 "data_offset": 2048, 00:14:12.389 "data_size": 63488 00:14:12.389 }, 00:14:12.389 { 00:14:12.389 "name": "BaseBdev4", 00:14:12.389 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:12.389 "is_configured": true, 00:14:12.389 "data_offset": 2048, 00:14:12.389 "data_size": 63488 00:14:12.389 } 00:14:12.389 ] 00:14:12.389 }' 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.389 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.649 "name": "raid_bdev1", 00:14:12.649 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:12.649 "strip_size_kb": 0, 00:14:12.649 "state": "online", 00:14:12.649 "raid_level": "raid1", 00:14:12.649 "superblock": true, 00:14:12.649 "num_base_bdevs": 4, 00:14:12.649 "num_base_bdevs_discovered": 3, 00:14:12.649 "num_base_bdevs_operational": 3, 00:14:12.649 "base_bdevs_list": [ 00:14:12.649 { 00:14:12.649 "name": "spare", 00:14:12.649 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:12.649 "is_configured": true, 00:14:12.649 "data_offset": 2048, 00:14:12.649 "data_size": 63488 00:14:12.649 }, 00:14:12.649 { 00:14:12.649 "name": null, 00:14:12.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.649 "is_configured": false, 00:14:12.649 "data_offset": 0, 00:14:12.649 "data_size": 63488 00:14:12.649 }, 00:14:12.649 { 00:14:12.649 "name": "BaseBdev3", 00:14:12.649 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:12.649 "is_configured": true, 00:14:12.649 "data_offset": 2048, 00:14:12.649 "data_size": 63488 00:14:12.649 }, 00:14:12.649 { 00:14:12.649 "name": "BaseBdev4", 00:14:12.649 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:12.649 "is_configured": true, 00:14:12.649 "data_offset": 2048, 00:14:12.649 "data_size": 63488 00:14:12.649 } 00:14:12.649 ] 00:14:12.649 }' 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.649 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.650 "name": "raid_bdev1", 00:14:12.650 "uuid": 
"9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:12.650 "strip_size_kb": 0, 00:14:12.650 "state": "online", 00:14:12.650 "raid_level": "raid1", 00:14:12.650 "superblock": true, 00:14:12.650 "num_base_bdevs": 4, 00:14:12.650 "num_base_bdevs_discovered": 3, 00:14:12.650 "num_base_bdevs_operational": 3, 00:14:12.650 "base_bdevs_list": [ 00:14:12.650 { 00:14:12.650 "name": "spare", 00:14:12.650 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:12.650 "is_configured": true, 00:14:12.650 "data_offset": 2048, 00:14:12.650 "data_size": 63488 00:14:12.650 }, 00:14:12.650 { 00:14:12.650 "name": null, 00:14:12.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.650 "is_configured": false, 00:14:12.650 "data_offset": 0, 00:14:12.650 "data_size": 63488 00:14:12.650 }, 00:14:12.650 { 00:14:12.650 "name": "BaseBdev3", 00:14:12.650 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:12.650 "is_configured": true, 00:14:12.650 "data_offset": 2048, 00:14:12.650 "data_size": 63488 00:14:12.650 }, 00:14:12.650 { 00:14:12.650 "name": "BaseBdev4", 00:14:12.650 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:12.650 "is_configured": true, 00:14:12.650 "data_offset": 2048, 00:14:12.650 "data_size": 63488 00:14:12.650 } 00:14:12.650 ] 00:14:12.650 }' 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.650 21:21:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.170 73.62 IOPS, 220.88 MiB/s [2024-11-26T21:21:31.326Z] 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:13.170 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.170 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.170 [2024-11-26 21:21:31.123649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.170 [2024-11-26 
21:21:31.123759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.170 00:14:13.170 Latency(us) 00:14:13.170 [2024-11-26T21:21:31.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.170 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:13.170 raid_bdev1 : 8.19 72.27 216.82 0.00 0.00 18528.93 309.44 119968.08 00:14:13.170 [2024-11-26T21:21:31.326Z] =================================================================================================================== 00:14:13.170 [2024-11-26T21:21:31.326Z] Total : 72.27 216.82 0.00 0.00 18528.93 309.44 119968.08 00:14:13.170 [2024-11-26 21:21:31.243255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.171 [2024-11-26 21:21:31.243359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.171 [2024-11-26 21:21:31.243474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.171 [2024-11-26 21:21:31.243517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:13.171 { 00:14:13.171 "results": [ 00:14:13.171 { 00:14:13.171 "job": "raid_bdev1", 00:14:13.171 "core_mask": "0x1", 00:14:13.171 "workload": "randrw", 00:14:13.171 "percentage": 50, 00:14:13.171 "status": "finished", 00:14:13.171 "queue_depth": 2, 00:14:13.171 "io_size": 3145728, 00:14:13.171 "runtime": 8.191209, 00:14:13.171 "iops": 72.27260347037904, 00:14:13.171 "mibps": 216.81781041113712, 00:14:13.171 "io_failed": 0, 00:14:13.171 "io_timeout": 0, 00:14:13.171 "avg_latency_us": 18528.932845509262, 00:14:13.171 "min_latency_us": 309.435807860262, 00:14:13.171 "max_latency_us": 119968.08384279476 00:14:13.171 } 00:14:13.171 ], 00:14:13.171 "core_count": 1 00:14:13.171 } 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.171 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:13.431 /dev/nbd0 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.431 1+0 records in 00:14:13.431 1+0 records out 00:14:13.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398236 s, 10.3 MB/s 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.431 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 
00:14:13.691 /dev/nbd1 00:14:13.691 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:13.691 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:13.691 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:13.691 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:13.691 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.691 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.691 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:13.691 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:13.691 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.691 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.691 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.691 1+0 records in 00:14:13.691 1+0 records out 00:14:13.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311501 s, 13.1 MB/s 00:14:13.692 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.692 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:13.692 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.692 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.692 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@893 -- # return 0 00:14:13.692 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.692 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.692 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:13.951 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:13.951 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.951 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:13.951 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.951 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:13.951 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.951 21:21:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.211 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:14.472 /dev/nbd1 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:14.472 21:21:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.472 1+0 records in 00:14:14.472 1+0 records out 00:14:14.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372359 s, 11.0 MB/s 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.472 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # 
local i 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.732 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:14:14.993 [2024-11-26 21:21:32.980434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:14.993 [2024-11-26 21:21:32.980566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.993 [2024-11-26 21:21:32.980609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:14.993 [2024-11-26 21:21:32.980640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.993 [2024-11-26 21:21:32.982992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.993 [2024-11-26 21:21:32.983059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:14.993 [2024-11-26 21:21:32.983167] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:14.993 [2024-11-26 21:21:32.983235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.993 [2024-11-26 21:21:32.983380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.993 [2024-11-26 21:21:32.983530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:14.993 spare 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.993 21:21:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.993 [2024-11-26 21:21:33.083463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:14.993 [2024-11-26 21:21:33.083528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:14.993 [2024-11-26 21:21:33.083812] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:14.993 [2024-11-26 21:21:33.084032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:14.993 [2024-11-26 21:21:33.084080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:14.993 [2024-11-26 21:21:33.084303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.993 21:21:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.993 "name": "raid_bdev1", 00:14:14.993 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:14.993 "strip_size_kb": 0, 00:14:14.993 "state": "online", 00:14:14.993 "raid_level": "raid1", 00:14:14.993 "superblock": true, 00:14:14.993 "num_base_bdevs": 4, 00:14:14.993 "num_base_bdevs_discovered": 3, 00:14:14.993 "num_base_bdevs_operational": 3, 00:14:14.993 "base_bdevs_list": [ 00:14:14.993 { 00:14:14.993 "name": "spare", 00:14:14.993 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:14.993 "is_configured": true, 00:14:14.993 "data_offset": 2048, 00:14:14.993 "data_size": 63488 00:14:14.993 }, 00:14:14.993 { 00:14:14.993 "name": null, 00:14:14.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.993 "is_configured": false, 00:14:14.993 "data_offset": 2048, 00:14:14.993 "data_size": 63488 00:14:14.993 }, 00:14:14.993 { 00:14:14.993 "name": "BaseBdev3", 00:14:14.993 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:14.993 "is_configured": true, 00:14:14.993 "data_offset": 2048, 00:14:14.993 "data_size": 63488 00:14:14.993 }, 00:14:14.993 { 00:14:14.993 "name": "BaseBdev4", 00:14:14.993 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:14.993 "is_configured": true, 00:14:14.993 "data_offset": 2048, 00:14:14.993 "data_size": 63488 00:14:14.993 } 00:14:14.993 ] 00:14:14.993 }' 00:14:14.993 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.994 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.563 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.563 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.563 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.563 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.563 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.563 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.563 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.563 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.564 "name": "raid_bdev1", 00:14:15.564 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:15.564 "strip_size_kb": 0, 00:14:15.564 "state": "online", 00:14:15.564 "raid_level": "raid1", 00:14:15.564 "superblock": true, 00:14:15.564 "num_base_bdevs": 4, 00:14:15.564 "num_base_bdevs_discovered": 3, 00:14:15.564 "num_base_bdevs_operational": 3, 00:14:15.564 "base_bdevs_list": [ 00:14:15.564 { 00:14:15.564 "name": "spare", 00:14:15.564 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:15.564 "is_configured": true, 00:14:15.564 "data_offset": 2048, 00:14:15.564 "data_size": 63488 00:14:15.564 }, 00:14:15.564 { 00:14:15.564 "name": null, 00:14:15.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.564 "is_configured": false, 00:14:15.564 "data_offset": 2048, 00:14:15.564 "data_size": 63488 
00:14:15.564 }, 00:14:15.564 { 00:14:15.564 "name": "BaseBdev3", 00:14:15.564 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:15.564 "is_configured": true, 00:14:15.564 "data_offset": 2048, 00:14:15.564 "data_size": 63488 00:14:15.564 }, 00:14:15.564 { 00:14:15.564 "name": "BaseBdev4", 00:14:15.564 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:15.564 "is_configured": true, 00:14:15.564 "data_offset": 2048, 00:14:15.564 "data_size": 63488 00:14:15.564 } 00:14:15.564 ] 00:14:15.564 }' 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.564 [2024-11-26 
21:21:33.707527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.564 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.824 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.824 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.824 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.824 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.824 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.824 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.824 
"name": "raid_bdev1", 00:14:15.824 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:15.824 "strip_size_kb": 0, 00:14:15.824 "state": "online", 00:14:15.824 "raid_level": "raid1", 00:14:15.824 "superblock": true, 00:14:15.824 "num_base_bdevs": 4, 00:14:15.824 "num_base_bdevs_discovered": 2, 00:14:15.824 "num_base_bdevs_operational": 2, 00:14:15.824 "base_bdevs_list": [ 00:14:15.824 { 00:14:15.824 "name": null, 00:14:15.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.824 "is_configured": false, 00:14:15.824 "data_offset": 0, 00:14:15.824 "data_size": 63488 00:14:15.824 }, 00:14:15.824 { 00:14:15.824 "name": null, 00:14:15.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.824 "is_configured": false, 00:14:15.824 "data_offset": 2048, 00:14:15.824 "data_size": 63488 00:14:15.824 }, 00:14:15.824 { 00:14:15.824 "name": "BaseBdev3", 00:14:15.824 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:15.824 "is_configured": true, 00:14:15.824 "data_offset": 2048, 00:14:15.824 "data_size": 63488 00:14:15.824 }, 00:14:15.824 { 00:14:15.824 "name": "BaseBdev4", 00:14:15.824 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:15.824 "is_configured": true, 00:14:15.824 "data_offset": 2048, 00:14:15.824 "data_size": 63488 00:14:15.824 } 00:14:15.824 ] 00:14:15.824 }' 00:14:15.824 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.824 21:21:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.084 21:21:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:16.084 21:21:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.084 21:21:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.084 [2024-11-26 21:21:34.147031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.084 [2024-11-26 
21:21:34.147199] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:16.084 [2024-11-26 21:21:34.147251] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:16.084 [2024-11-26 21:21:34.147300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.084 [2024-11-26 21:21:34.161919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:16.084 21:21:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.084 21:21:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:16.084 [2024-11-26 21:21:34.163980] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.024 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.024 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.024 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.024 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.024 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.024 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.024 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.024 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.024 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.284 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.285 "name": "raid_bdev1", 00:14:17.285 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:17.285 "strip_size_kb": 0, 00:14:17.285 "state": "online", 00:14:17.285 "raid_level": "raid1", 00:14:17.285 "superblock": true, 00:14:17.285 "num_base_bdevs": 4, 00:14:17.285 "num_base_bdevs_discovered": 3, 00:14:17.285 "num_base_bdevs_operational": 3, 00:14:17.285 "process": { 00:14:17.285 "type": "rebuild", 00:14:17.285 "target": "spare", 00:14:17.285 "progress": { 00:14:17.285 "blocks": 20480, 00:14:17.285 "percent": 32 00:14:17.285 } 00:14:17.285 }, 00:14:17.285 "base_bdevs_list": [ 00:14:17.285 { 00:14:17.285 "name": "spare", 00:14:17.285 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:17.285 "is_configured": true, 00:14:17.285 "data_offset": 2048, 00:14:17.285 "data_size": 63488 00:14:17.285 }, 00:14:17.285 { 00:14:17.285 "name": null, 00:14:17.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.285 "is_configured": false, 00:14:17.285 "data_offset": 2048, 00:14:17.285 "data_size": 63488 00:14:17.285 }, 00:14:17.285 { 00:14:17.285 "name": "BaseBdev3", 00:14:17.285 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:17.285 "is_configured": true, 00:14:17.285 "data_offset": 2048, 00:14:17.285 "data_size": 63488 00:14:17.285 }, 00:14:17.285 { 00:14:17.285 "name": "BaseBdev4", 00:14:17.285 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:17.285 "is_configured": true, 00:14:17.285 "data_offset": 2048, 00:14:17.285 "data_size": 63488 00:14:17.285 } 00:14:17.285 ] 00:14:17.285 }' 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.285 [2024-11-26 21:21:35.324471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:17.285 [2024-11-26 21:21:35.372320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:17.285 [2024-11-26 21:21:35.372441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.285 [2024-11-26 21:21:35.372462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:17.285 [2024-11-26 21:21:35.372471] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.285 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.583 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.583 "name": "raid_bdev1", 00:14:17.583 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:17.583 "strip_size_kb": 0, 00:14:17.583 "state": "online", 00:14:17.583 "raid_level": "raid1", 00:14:17.583 "superblock": true, 00:14:17.583 "num_base_bdevs": 4, 00:14:17.583 "num_base_bdevs_discovered": 2, 00:14:17.583 "num_base_bdevs_operational": 2, 00:14:17.583 "base_bdevs_list": [ 00:14:17.583 { 00:14:17.583 "name": null, 00:14:17.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.583 "is_configured": false, 00:14:17.583 "data_offset": 0, 00:14:17.583 "data_size": 63488 00:14:17.583 }, 00:14:17.583 { 00:14:17.583 "name": null, 00:14:17.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.583 "is_configured": false, 00:14:17.583 "data_offset": 2048, 00:14:17.584 "data_size": 63488 00:14:17.584 }, 00:14:17.584 { 00:14:17.584 "name": "BaseBdev3", 00:14:17.584 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:17.584 "is_configured": true, 
00:14:17.584 "data_offset": 2048, 00:14:17.584 "data_size": 63488 00:14:17.584 }, 00:14:17.584 { 00:14:17.584 "name": "BaseBdev4", 00:14:17.584 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:17.584 "is_configured": true, 00:14:17.584 "data_offset": 2048, 00:14:17.584 "data_size": 63488 00:14:17.584 } 00:14:17.584 ] 00:14:17.584 }' 00:14:17.584 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.584 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.844 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:17.844 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.844 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.844 [2024-11-26 21:21:35.828267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:17.844 [2024-11-26 21:21:35.828385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.844 [2024-11-26 21:21:35.828434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:17.844 [2024-11-26 21:21:35.828462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.844 [2024-11-26 21:21:35.828998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.844 [2024-11-26 21:21:35.829060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:17.844 [2024-11-26 21:21:35.829177] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:17.844 [2024-11-26 21:21:35.829215] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:17.844 [2024-11-26 21:21:35.829256] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:17.844 [2024-11-26 21:21:35.829343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.844 [2024-11-26 21:21:35.843370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:17.844 spare 00:14:17.844 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.844 21:21:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:17.844 [2024-11-26 21:21:35.845433] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:18.784 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.784 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.784 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.784 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.784 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.784 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.784 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.784 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.784 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.784 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.784 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.784 "name": "raid_bdev1", 00:14:18.784 
"uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:18.784 "strip_size_kb": 0, 00:14:18.784 "state": "online", 00:14:18.784 "raid_level": "raid1", 00:14:18.784 "superblock": true, 00:14:18.784 "num_base_bdevs": 4, 00:14:18.784 "num_base_bdevs_discovered": 3, 00:14:18.784 "num_base_bdevs_operational": 3, 00:14:18.784 "process": { 00:14:18.784 "type": "rebuild", 00:14:18.784 "target": "spare", 00:14:18.784 "progress": { 00:14:18.784 "blocks": 20480, 00:14:18.784 "percent": 32 00:14:18.784 } 00:14:18.784 }, 00:14:18.784 "base_bdevs_list": [ 00:14:18.784 { 00:14:18.784 "name": "spare", 00:14:18.784 "uuid": "1e99b029-ef4e-5b88-abd9-892615fd6793", 00:14:18.784 "is_configured": true, 00:14:18.784 "data_offset": 2048, 00:14:18.784 "data_size": 63488 00:14:18.784 }, 00:14:18.784 { 00:14:18.784 "name": null, 00:14:18.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.784 "is_configured": false, 00:14:18.784 "data_offset": 2048, 00:14:18.784 "data_size": 63488 00:14:18.784 }, 00:14:18.784 { 00:14:18.784 "name": "BaseBdev3", 00:14:18.784 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:18.784 "is_configured": true, 00:14:18.784 "data_offset": 2048, 00:14:18.785 "data_size": 63488 00:14:18.785 }, 00:14:18.785 { 00:14:18.785 "name": "BaseBdev4", 00:14:18.785 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:18.785 "is_configured": true, 00:14:18.785 "data_offset": 2048, 00:14:18.785 "data_size": 63488 00:14:18.785 } 00:14:18.785 ] 00:14:18.785 }' 00:14:18.785 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.045 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.045 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.045 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.045 21:21:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:19.045 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.045 21:21:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.045 [2024-11-26 21:21:36.984920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.045 [2024-11-26 21:21:37.053537] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:19.045 [2024-11-26 21:21:37.053664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.045 [2024-11-26 21:21:37.053682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.045 [2024-11-26 21:21:37.053696] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.045 21:21:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.045 "name": "raid_bdev1", 00:14:19.045 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:19.045 "strip_size_kb": 0, 00:14:19.045 "state": "online", 00:14:19.045 "raid_level": "raid1", 00:14:19.045 "superblock": true, 00:14:19.045 "num_base_bdevs": 4, 00:14:19.045 "num_base_bdevs_discovered": 2, 00:14:19.045 "num_base_bdevs_operational": 2, 00:14:19.045 "base_bdevs_list": [ 00:14:19.045 { 00:14:19.045 "name": null, 00:14:19.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.045 "is_configured": false, 00:14:19.045 "data_offset": 0, 00:14:19.045 "data_size": 63488 00:14:19.045 }, 00:14:19.045 { 00:14:19.045 "name": null, 00:14:19.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.045 "is_configured": false, 00:14:19.045 "data_offset": 2048, 00:14:19.045 "data_size": 63488 00:14:19.045 }, 00:14:19.045 { 00:14:19.045 "name": "BaseBdev3", 00:14:19.045 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:19.045 "is_configured": true, 00:14:19.045 "data_offset": 2048, 00:14:19.045 "data_size": 63488 00:14:19.045 }, 00:14:19.045 { 00:14:19.045 "name": "BaseBdev4", 00:14:19.045 "uuid": 
"ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:19.045 "is_configured": true, 00:14:19.045 "data_offset": 2048, 00:14:19.045 "data_size": 63488 00:14:19.045 } 00:14:19.045 ] 00:14:19.045 }' 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.045 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.616 "name": "raid_bdev1", 00:14:19.616 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:19.616 "strip_size_kb": 0, 00:14:19.616 "state": "online", 00:14:19.616 "raid_level": "raid1", 00:14:19.616 "superblock": true, 00:14:19.616 "num_base_bdevs": 4, 00:14:19.616 "num_base_bdevs_discovered": 2, 00:14:19.616 "num_base_bdevs_operational": 2, 00:14:19.616 
"base_bdevs_list": [ 00:14:19.616 { 00:14:19.616 "name": null, 00:14:19.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.616 "is_configured": false, 00:14:19.616 "data_offset": 0, 00:14:19.616 "data_size": 63488 00:14:19.616 }, 00:14:19.616 { 00:14:19.616 "name": null, 00:14:19.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.616 "is_configured": false, 00:14:19.616 "data_offset": 2048, 00:14:19.616 "data_size": 63488 00:14:19.616 }, 00:14:19.616 { 00:14:19.616 "name": "BaseBdev3", 00:14:19.616 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:19.616 "is_configured": true, 00:14:19.616 "data_offset": 2048, 00:14:19.616 "data_size": 63488 00:14:19.616 }, 00:14:19.616 { 00:14:19.616 "name": "BaseBdev4", 00:14:19.616 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:19.616 "is_configured": true, 00:14:19.616 "data_offset": 2048, 00:14:19.616 "data_size": 63488 00:14:19.616 } 00:14:19.616 ] 00:14:19.616 }' 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.616 [2024-11-26 21:21:37.669064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:19.616 [2024-11-26 21:21:37.669118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.616 [2024-11-26 21:21:37.669137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:19.616 [2024-11-26 21:21:37.669148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.616 [2024-11-26 21:21:37.669605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.616 [2024-11-26 21:21:37.669638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:19.616 [2024-11-26 21:21:37.669710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:19.616 [2024-11-26 21:21:37.669733] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:19.616 [2024-11-26 21:21:37.669741] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:19.616 [2024-11-26 21:21:37.669754] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:19.616 BaseBdev1 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.616 21:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.557 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.817 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.817 "name": "raid_bdev1", 00:14:20.817 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:20.817 "strip_size_kb": 0, 00:14:20.817 "state": "online", 00:14:20.817 "raid_level": "raid1", 00:14:20.817 "superblock": true, 00:14:20.817 "num_base_bdevs": 4, 00:14:20.817 "num_base_bdevs_discovered": 2, 00:14:20.817 "num_base_bdevs_operational": 2, 00:14:20.817 "base_bdevs_list": [ 00:14:20.817 { 00:14:20.817 
"name": null, 00:14:20.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.817 "is_configured": false, 00:14:20.817 "data_offset": 0, 00:14:20.817 "data_size": 63488 00:14:20.817 }, 00:14:20.817 { 00:14:20.817 "name": null, 00:14:20.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.817 "is_configured": false, 00:14:20.817 "data_offset": 2048, 00:14:20.817 "data_size": 63488 00:14:20.817 }, 00:14:20.817 { 00:14:20.817 "name": "BaseBdev3", 00:14:20.817 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:20.817 "is_configured": true, 00:14:20.817 "data_offset": 2048, 00:14:20.817 "data_size": 63488 00:14:20.817 }, 00:14:20.817 { 00:14:20.817 "name": "BaseBdev4", 00:14:20.817 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:20.817 "is_configured": true, 00:14:20.817 "data_offset": 2048, 00:14:20.817 "data_size": 63488 00:14:20.817 } 00:14:20.817 ] 00:14:20.817 }' 00:14:20.817 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.817 21:21:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.077 "name": "raid_bdev1", 00:14:21.077 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:21.077 "strip_size_kb": 0, 00:14:21.077 "state": "online", 00:14:21.077 "raid_level": "raid1", 00:14:21.077 "superblock": true, 00:14:21.077 "num_base_bdevs": 4, 00:14:21.077 "num_base_bdevs_discovered": 2, 00:14:21.077 "num_base_bdevs_operational": 2, 00:14:21.077 "base_bdevs_list": [ 00:14:21.077 { 00:14:21.077 "name": null, 00:14:21.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.077 "is_configured": false, 00:14:21.077 "data_offset": 0, 00:14:21.077 "data_size": 63488 00:14:21.077 }, 00:14:21.077 { 00:14:21.077 "name": null, 00:14:21.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.077 "is_configured": false, 00:14:21.077 "data_offset": 2048, 00:14:21.077 "data_size": 63488 00:14:21.077 }, 00:14:21.077 { 00:14:21.077 "name": "BaseBdev3", 00:14:21.077 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:21.077 "is_configured": true, 00:14:21.077 "data_offset": 2048, 00:14:21.077 "data_size": 63488 00:14:21.077 }, 00:14:21.077 { 00:14:21.077 "name": "BaseBdev4", 00:14:21.077 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:21.077 "is_configured": true, 00:14:21.077 "data_offset": 2048, 00:14:21.077 "data_size": 63488 00:14:21.077 } 00:14:21.077 ] 00:14:21.077 }' 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.077 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:21.336 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.336 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:21.336 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:21.336 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:21.336 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:21.336 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.336 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:21.336 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.336 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:21.336 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.336 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.337 [2024-11-26 21:21:39.278541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.337 [2024-11-26 21:21:39.278657] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:21.337 [2024-11-26 21:21:39.278668] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:21.337 request: 00:14:21.337 { 00:14:21.337 "base_bdev": "BaseBdev1", 00:14:21.337 "raid_bdev": "raid_bdev1", 00:14:21.337 "method": "bdev_raid_add_base_bdev", 00:14:21.337 
"req_id": 1 00:14:21.337 } 00:14:21.337 Got JSON-RPC error response 00:14:21.337 response: 00:14:21.337 { 00:14:21.337 "code": -22, 00:14:21.337 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:21.337 } 00:14:21.337 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:21.337 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:21.337 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.337 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.337 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.337 21:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.273 
21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.273 "name": "raid_bdev1", 00:14:22.273 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:22.273 "strip_size_kb": 0, 00:14:22.273 "state": "online", 00:14:22.273 "raid_level": "raid1", 00:14:22.273 "superblock": true, 00:14:22.273 "num_base_bdevs": 4, 00:14:22.273 "num_base_bdevs_discovered": 2, 00:14:22.273 "num_base_bdevs_operational": 2, 00:14:22.273 "base_bdevs_list": [ 00:14:22.273 { 00:14:22.273 "name": null, 00:14:22.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.273 "is_configured": false, 00:14:22.273 "data_offset": 0, 00:14:22.273 "data_size": 63488 00:14:22.273 }, 00:14:22.273 { 00:14:22.273 "name": null, 00:14:22.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.273 "is_configured": false, 00:14:22.273 "data_offset": 2048, 00:14:22.273 "data_size": 63488 00:14:22.273 }, 00:14:22.273 { 00:14:22.273 "name": "BaseBdev3", 00:14:22.273 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:22.273 "is_configured": true, 00:14:22.273 "data_offset": 2048, 00:14:22.273 "data_size": 63488 00:14:22.273 }, 00:14:22.273 { 00:14:22.273 "name": "BaseBdev4", 00:14:22.273 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:22.273 "is_configured": true, 00:14:22.273 "data_offset": 2048, 00:14:22.273 "data_size": 63488 00:14:22.273 } 00:14:22.273 ] 00:14:22.273 }' 00:14:22.273 21:21:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.273 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.842 "name": "raid_bdev1", 00:14:22.842 "uuid": "9609ef6c-79eb-4fe4-a984-cb03e853583c", 00:14:22.842 "strip_size_kb": 0, 00:14:22.842 "state": "online", 00:14:22.842 "raid_level": "raid1", 00:14:22.842 "superblock": true, 00:14:22.842 "num_base_bdevs": 4, 00:14:22.842 "num_base_bdevs_discovered": 2, 00:14:22.842 "num_base_bdevs_operational": 2, 00:14:22.842 "base_bdevs_list": [ 00:14:22.842 { 00:14:22.842 "name": null, 00:14:22.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.842 "is_configured": false, 00:14:22.842 "data_offset": 0, 00:14:22.842 
"data_size": 63488 00:14:22.842 }, 00:14:22.842 { 00:14:22.842 "name": null, 00:14:22.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.842 "is_configured": false, 00:14:22.842 "data_offset": 2048, 00:14:22.842 "data_size": 63488 00:14:22.842 }, 00:14:22.842 { 00:14:22.842 "name": "BaseBdev3", 00:14:22.842 "uuid": "1dc1db9b-1b8b-528c-b7b7-e26dae98ef64", 00:14:22.842 "is_configured": true, 00:14:22.842 "data_offset": 2048, 00:14:22.842 "data_size": 63488 00:14:22.842 }, 00:14:22.842 { 00:14:22.842 "name": "BaseBdev4", 00:14:22.842 "uuid": "ce06d177-a9bd-5807-a64d-7e6c548955bd", 00:14:22.842 "is_configured": true, 00:14:22.842 "data_offset": 2048, 00:14:22.842 "data_size": 63488 00:14:22.842 } 00:14:22.842 ] 00:14:22.842 }' 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78965 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78965 ']' 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78965 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78965 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.842 
21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78965' 00:14:22.842 killing process with pid 78965 00:14:22.842 Received shutdown signal, test time was about 17.922443 seconds 00:14:22.842 00:14:22.842 Latency(us) 00:14:22.842 [2024-11-26T21:21:40.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.842 [2024-11-26T21:21:40.998Z] =================================================================================================================== 00:14:22.842 [2024-11-26T21:21:40.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78965 00:14:22.842 [2024-11-26 21:21:40.935881] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.842 [2024-11-26 21:21:40.935988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.842 [2024-11-26 21:21:40.936061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.842 [2024-11-26 21:21:40.936071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:22.842 21:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78965 00:14:23.412 [2024-11-26 21:21:41.369360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.792 21:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:24.792 00:14:24.792 real 0m21.442s 00:14:24.792 user 0m27.712s 00:14:24.792 sys 0m2.681s 00:14:24.792 21:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.792 21:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.792 
************************************ 00:14:24.792 END TEST raid_rebuild_test_sb_io 00:14:24.792 ************************************ 00:14:24.792 21:21:42 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:24.792 21:21:42 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:24.792 21:21:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:24.792 21:21:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.792 21:21:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.792 ************************************ 00:14:24.792 START TEST raid5f_state_function_test 00:14:24.792 ************************************ 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:24.792 21:21:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:24.792 Process raid pid: 79687 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79687 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79687' 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79687 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79687 ']' 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.792 21:21:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.792 [2024-11-26 21:21:42.762329] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:14:24.792 [2024-11-26 21:21:42.762511] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.792 [2024-11-26 21:21:42.937301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.052 [2024-11-26 21:21:43.068799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.311 [2024-11-26 21:21:43.285543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.311 [2024-11-26 21:21:43.285584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.571 [2024-11-26 21:21:43.590705] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.571 [2024-11-26 21:21:43.590769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.571 [2024-11-26 21:21:43.590779] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.571 [2024-11-26 21:21:43.590788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.571 [2024-11-26 21:21:43.590794] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:25.571 [2024-11-26 21:21:43.590802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.571 "name": "Existed_Raid", 00:14:25.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.571 "strip_size_kb": 64, 00:14:25.571 "state": "configuring", 00:14:25.571 "raid_level": "raid5f", 00:14:25.571 "superblock": false, 00:14:25.571 "num_base_bdevs": 3, 00:14:25.571 "num_base_bdevs_discovered": 0, 00:14:25.571 "num_base_bdevs_operational": 3, 00:14:25.571 "base_bdevs_list": [ 00:14:25.571 { 00:14:25.571 "name": "BaseBdev1", 00:14:25.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.571 "is_configured": false, 00:14:25.571 "data_offset": 0, 00:14:25.571 "data_size": 0 00:14:25.571 }, 00:14:25.571 { 00:14:25.571 "name": "BaseBdev2", 00:14:25.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.571 "is_configured": false, 00:14:25.571 "data_offset": 0, 00:14:25.571 "data_size": 0 00:14:25.571 }, 00:14:25.571 { 00:14:25.571 "name": "BaseBdev3", 00:14:25.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.571 "is_configured": false, 00:14:25.571 "data_offset": 0, 00:14:25.571 "data_size": 0 00:14:25.571 } 00:14:25.571 ] 00:14:25.571 }' 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.571 21:21:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.142 [2024-11-26 21:21:44.041866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.142 [2024-11-26 21:21:44.042002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.142 [2024-11-26 21:21:44.053844] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.142 [2024-11-26 21:21:44.053925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.142 [2024-11-26 21:21:44.053952] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.142 [2024-11-26 21:21:44.053984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.142 [2024-11-26 21:21:44.054027] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:26.142 [2024-11-26 21:21:44.054050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.142 [2024-11-26 21:21:44.108773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.142 BaseBdev1 00:14:26.142 21:21:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.142 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.143 [ 00:14:26.143 { 00:14:26.143 "name": "BaseBdev1", 00:14:26.143 "aliases": [ 00:14:26.143 "3cf2a352-e898-49be-a825-ae798acb39ca" 00:14:26.143 ], 00:14:26.143 "product_name": "Malloc disk", 00:14:26.143 "block_size": 512, 00:14:26.143 "num_blocks": 65536, 00:14:26.143 "uuid": "3cf2a352-e898-49be-a825-ae798acb39ca", 00:14:26.143 "assigned_rate_limits": { 00:14:26.143 "rw_ios_per_sec": 0, 00:14:26.143 
"rw_mbytes_per_sec": 0, 00:14:26.143 "r_mbytes_per_sec": 0, 00:14:26.143 "w_mbytes_per_sec": 0 00:14:26.143 }, 00:14:26.143 "claimed": true, 00:14:26.143 "claim_type": "exclusive_write", 00:14:26.143 "zoned": false, 00:14:26.143 "supported_io_types": { 00:14:26.143 "read": true, 00:14:26.143 "write": true, 00:14:26.143 "unmap": true, 00:14:26.143 "flush": true, 00:14:26.143 "reset": true, 00:14:26.143 "nvme_admin": false, 00:14:26.143 "nvme_io": false, 00:14:26.143 "nvme_io_md": false, 00:14:26.143 "write_zeroes": true, 00:14:26.143 "zcopy": true, 00:14:26.143 "get_zone_info": false, 00:14:26.143 "zone_management": false, 00:14:26.143 "zone_append": false, 00:14:26.143 "compare": false, 00:14:26.143 "compare_and_write": false, 00:14:26.143 "abort": true, 00:14:26.143 "seek_hole": false, 00:14:26.143 "seek_data": false, 00:14:26.143 "copy": true, 00:14:26.143 "nvme_iov_md": false 00:14:26.143 }, 00:14:26.143 "memory_domains": [ 00:14:26.143 { 00:14:26.143 "dma_device_id": "system", 00:14:26.143 "dma_device_type": 1 00:14:26.143 }, 00:14:26.143 { 00:14:26.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.143 "dma_device_type": 2 00:14:26.143 } 00:14:26.143 ], 00:14:26.143 "driver_specific": {} 00:14:26.143 } 00:14:26.143 ] 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.143 21:21:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.143 "name": "Existed_Raid", 00:14:26.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.143 "strip_size_kb": 64, 00:14:26.143 "state": "configuring", 00:14:26.143 "raid_level": "raid5f", 00:14:26.143 "superblock": false, 00:14:26.143 "num_base_bdevs": 3, 00:14:26.143 "num_base_bdevs_discovered": 1, 00:14:26.143 "num_base_bdevs_operational": 3, 00:14:26.143 "base_bdevs_list": [ 00:14:26.143 { 00:14:26.143 "name": "BaseBdev1", 00:14:26.143 "uuid": "3cf2a352-e898-49be-a825-ae798acb39ca", 00:14:26.143 "is_configured": true, 00:14:26.143 "data_offset": 0, 00:14:26.143 "data_size": 65536 00:14:26.143 }, 00:14:26.143 { 00:14:26.143 "name": 
"BaseBdev2", 00:14:26.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.143 "is_configured": false, 00:14:26.143 "data_offset": 0, 00:14:26.143 "data_size": 0 00:14:26.143 }, 00:14:26.143 { 00:14:26.143 "name": "BaseBdev3", 00:14:26.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.143 "is_configured": false, 00:14:26.143 "data_offset": 0, 00:14:26.143 "data_size": 0 00:14:26.143 } 00:14:26.143 ] 00:14:26.143 }' 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.143 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.713 [2024-11-26 21:21:44.572071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.713 [2024-11-26 21:21:44.572117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.713 [2024-11-26 21:21:44.584116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.713 [2024-11-26 21:21:44.586045] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:26.713 [2024-11-26 21:21:44.586084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.713 [2024-11-26 21:21:44.586094] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:26.713 [2024-11-26 21:21:44.586102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.713 "name": "Existed_Raid", 00:14:26.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.713 "strip_size_kb": 64, 00:14:26.713 "state": "configuring", 00:14:26.713 "raid_level": "raid5f", 00:14:26.713 "superblock": false, 00:14:26.713 "num_base_bdevs": 3, 00:14:26.713 "num_base_bdevs_discovered": 1, 00:14:26.713 "num_base_bdevs_operational": 3, 00:14:26.713 "base_bdevs_list": [ 00:14:26.713 { 00:14:26.713 "name": "BaseBdev1", 00:14:26.713 "uuid": "3cf2a352-e898-49be-a825-ae798acb39ca", 00:14:26.713 "is_configured": true, 00:14:26.713 "data_offset": 0, 00:14:26.713 "data_size": 65536 00:14:26.713 }, 00:14:26.713 { 00:14:26.713 "name": "BaseBdev2", 00:14:26.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.713 "is_configured": false, 00:14:26.713 "data_offset": 0, 00:14:26.713 "data_size": 0 00:14:26.713 }, 00:14:26.713 { 00:14:26.713 "name": "BaseBdev3", 00:14:26.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.713 "is_configured": false, 00:14:26.713 "data_offset": 0, 00:14:26.713 "data_size": 0 00:14:26.713 } 00:14:26.713 ] 00:14:26.713 }' 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.713 21:21:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.974 [2024-11-26 21:21:45.087724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.974 BaseBdev2 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.974 21:21:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.974 [ 00:14:26.974 { 00:14:26.974 "name": "BaseBdev2", 00:14:26.974 "aliases": [ 00:14:26.974 "bad85b39-d433-404e-b914-bf41701da4c5" 00:14:26.974 ], 00:14:26.974 "product_name": "Malloc disk", 00:14:26.974 "block_size": 512, 00:14:26.974 "num_blocks": 65536, 00:14:26.974 "uuid": "bad85b39-d433-404e-b914-bf41701da4c5", 00:14:26.974 "assigned_rate_limits": { 00:14:26.974 "rw_ios_per_sec": 0, 00:14:26.974 "rw_mbytes_per_sec": 0, 00:14:26.974 "r_mbytes_per_sec": 0, 00:14:26.974 "w_mbytes_per_sec": 0 00:14:26.974 }, 00:14:26.974 "claimed": true, 00:14:26.974 "claim_type": "exclusive_write", 00:14:26.974 "zoned": false, 00:14:26.974 "supported_io_types": { 00:14:26.974 "read": true, 00:14:26.974 "write": true, 00:14:26.974 "unmap": true, 00:14:26.974 "flush": true, 00:14:26.974 "reset": true, 00:14:26.974 "nvme_admin": false, 00:14:26.974 "nvme_io": false, 00:14:26.974 "nvme_io_md": false, 00:14:26.974 "write_zeroes": true, 00:14:26.974 "zcopy": true, 00:14:26.974 "get_zone_info": false, 00:14:26.974 "zone_management": false, 00:14:26.974 "zone_append": false, 00:14:26.974 "compare": false, 00:14:26.974 "compare_and_write": false, 00:14:26.974 "abort": true, 00:14:26.974 "seek_hole": false, 00:14:26.974 "seek_data": false, 00:14:26.974 "copy": true, 00:14:26.974 "nvme_iov_md": false 00:14:26.974 }, 00:14:26.974 "memory_domains": [ 00:14:26.974 { 00:14:26.974 "dma_device_id": "system", 00:14:26.974 "dma_device_type": 1 00:14:26.974 }, 00:14:26.974 { 00:14:26.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.974 "dma_device_type": 2 00:14:26.974 } 00:14:26.974 ], 00:14:26.974 "driver_specific": {} 00:14:26.974 } 00:14:26.974 ] 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:27.234 "name": "Existed_Raid", 00:14:27.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.234 "strip_size_kb": 64, 00:14:27.234 "state": "configuring", 00:14:27.234 "raid_level": "raid5f", 00:14:27.234 "superblock": false, 00:14:27.234 "num_base_bdevs": 3, 00:14:27.234 "num_base_bdevs_discovered": 2, 00:14:27.234 "num_base_bdevs_operational": 3, 00:14:27.234 "base_bdevs_list": [ 00:14:27.234 { 00:14:27.234 "name": "BaseBdev1", 00:14:27.234 "uuid": "3cf2a352-e898-49be-a825-ae798acb39ca", 00:14:27.234 "is_configured": true, 00:14:27.234 "data_offset": 0, 00:14:27.234 "data_size": 65536 00:14:27.234 }, 00:14:27.234 { 00:14:27.234 "name": "BaseBdev2", 00:14:27.234 "uuid": "bad85b39-d433-404e-b914-bf41701da4c5", 00:14:27.234 "is_configured": true, 00:14:27.234 "data_offset": 0, 00:14:27.234 "data_size": 65536 00:14:27.234 }, 00:14:27.234 { 00:14:27.234 "name": "BaseBdev3", 00:14:27.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.234 "is_configured": false, 00:14:27.234 "data_offset": 0, 00:14:27.234 "data_size": 0 00:14:27.234 } 00:14:27.234 ] 00:14:27.234 }' 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.234 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.494 [2024-11-26 21:21:45.638110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.494 [2024-11-26 21:21:45.638192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:27.494 [2024-11-26 21:21:45.638211] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:27.494 [2024-11-26 21:21:45.638489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:27.494 [2024-11-26 21:21:45.643965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:27.494 [2024-11-26 21:21:45.643987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:27.494 [2024-11-26 21:21:45.644304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.494 BaseBdev3 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.494 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.753 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.753 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:27.753 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.753 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.753 [ 00:14:27.753 { 00:14:27.753 "name": "BaseBdev3", 00:14:27.753 "aliases": [ 00:14:27.753 "1ec4a7c0-2782-49e3-97c5-9adfcb29f9f5" 00:14:27.753 ], 00:14:27.753 "product_name": "Malloc disk", 00:14:27.753 "block_size": 512, 00:14:27.753 "num_blocks": 65536, 00:14:27.753 "uuid": "1ec4a7c0-2782-49e3-97c5-9adfcb29f9f5", 00:14:27.753 "assigned_rate_limits": { 00:14:27.753 "rw_ios_per_sec": 0, 00:14:27.753 "rw_mbytes_per_sec": 0, 00:14:27.753 "r_mbytes_per_sec": 0, 00:14:27.753 "w_mbytes_per_sec": 0 00:14:27.753 }, 00:14:27.753 "claimed": true, 00:14:27.754 "claim_type": "exclusive_write", 00:14:27.754 "zoned": false, 00:14:27.754 "supported_io_types": { 00:14:27.754 "read": true, 00:14:27.754 "write": true, 00:14:27.754 "unmap": true, 00:14:27.754 "flush": true, 00:14:27.754 "reset": true, 00:14:27.754 "nvme_admin": false, 00:14:27.754 "nvme_io": false, 00:14:27.754 "nvme_io_md": false, 00:14:27.754 "write_zeroes": true, 00:14:27.754 "zcopy": true, 00:14:27.754 "get_zone_info": false, 00:14:27.754 "zone_management": false, 00:14:27.754 "zone_append": false, 00:14:27.754 "compare": false, 00:14:27.754 "compare_and_write": false, 00:14:27.754 "abort": true, 00:14:27.754 "seek_hole": false, 00:14:27.754 "seek_data": false, 00:14:27.754 "copy": true, 00:14:27.754 "nvme_iov_md": false 00:14:27.754 }, 00:14:27.754 "memory_domains": [ 00:14:27.754 { 00:14:27.754 "dma_device_id": "system", 00:14:27.754 "dma_device_type": 1 00:14:27.754 }, 00:14:27.754 { 00:14:27.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.754 "dma_device_type": 2 00:14:27.754 } 00:14:27.754 ], 00:14:27.754 "driver_specific": {} 00:14:27.754 } 00:14:27.754 ] 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.754 21:21:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.754 "name": "Existed_Raid", 00:14:27.754 "uuid": "050499ea-8038-47dd-9f4f-71b909b957fe", 00:14:27.754 "strip_size_kb": 64, 00:14:27.754 "state": "online", 00:14:27.754 "raid_level": "raid5f", 00:14:27.754 "superblock": false, 00:14:27.754 "num_base_bdevs": 3, 00:14:27.754 "num_base_bdevs_discovered": 3, 00:14:27.754 "num_base_bdevs_operational": 3, 00:14:27.754 "base_bdevs_list": [ 00:14:27.754 { 00:14:27.754 "name": "BaseBdev1", 00:14:27.754 "uuid": "3cf2a352-e898-49be-a825-ae798acb39ca", 00:14:27.754 "is_configured": true, 00:14:27.754 "data_offset": 0, 00:14:27.754 "data_size": 65536 00:14:27.754 }, 00:14:27.754 { 00:14:27.754 "name": "BaseBdev2", 00:14:27.754 "uuid": "bad85b39-d433-404e-b914-bf41701da4c5", 00:14:27.754 "is_configured": true, 00:14:27.754 "data_offset": 0, 00:14:27.754 "data_size": 65536 00:14:27.754 }, 00:14:27.754 { 00:14:27.754 "name": "BaseBdev3", 00:14:27.754 "uuid": "1ec4a7c0-2782-49e3-97c5-9adfcb29f9f5", 00:14:27.754 "is_configured": true, 00:14:27.754 "data_offset": 0, 00:14:27.754 "data_size": 65536 00:14:27.754 } 00:14:27.754 ] 00:14:27.754 }' 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.754 21:21:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.012 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:28.012 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:28.012 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:28.012 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:28.012 21:21:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:28.012 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:28.012 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:28.012 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.012 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.012 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:28.012 [2024-11-26 21:21:46.086771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.012 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.012 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:28.012 "name": "Existed_Raid", 00:14:28.012 "aliases": [ 00:14:28.012 "050499ea-8038-47dd-9f4f-71b909b957fe" 00:14:28.012 ], 00:14:28.012 "product_name": "Raid Volume", 00:14:28.012 "block_size": 512, 00:14:28.012 "num_blocks": 131072, 00:14:28.012 "uuid": "050499ea-8038-47dd-9f4f-71b909b957fe", 00:14:28.012 "assigned_rate_limits": { 00:14:28.012 "rw_ios_per_sec": 0, 00:14:28.012 "rw_mbytes_per_sec": 0, 00:14:28.012 "r_mbytes_per_sec": 0, 00:14:28.012 "w_mbytes_per_sec": 0 00:14:28.012 }, 00:14:28.012 "claimed": false, 00:14:28.012 "zoned": false, 00:14:28.012 "supported_io_types": { 00:14:28.012 "read": true, 00:14:28.012 "write": true, 00:14:28.012 "unmap": false, 00:14:28.012 "flush": false, 00:14:28.012 "reset": true, 00:14:28.012 "nvme_admin": false, 00:14:28.012 "nvme_io": false, 00:14:28.012 "nvme_io_md": false, 00:14:28.012 "write_zeroes": true, 00:14:28.012 "zcopy": false, 00:14:28.012 "get_zone_info": false, 00:14:28.012 "zone_management": false, 00:14:28.012 "zone_append": false, 
00:14:28.012 "compare": false, 00:14:28.012 "compare_and_write": false, 00:14:28.012 "abort": false, 00:14:28.012 "seek_hole": false, 00:14:28.013 "seek_data": false, 00:14:28.013 "copy": false, 00:14:28.013 "nvme_iov_md": false 00:14:28.013 }, 00:14:28.013 "driver_specific": { 00:14:28.013 "raid": { 00:14:28.013 "uuid": "050499ea-8038-47dd-9f4f-71b909b957fe", 00:14:28.013 "strip_size_kb": 64, 00:14:28.013 "state": "online", 00:14:28.013 "raid_level": "raid5f", 00:14:28.013 "superblock": false, 00:14:28.013 "num_base_bdevs": 3, 00:14:28.013 "num_base_bdevs_discovered": 3, 00:14:28.013 "num_base_bdevs_operational": 3, 00:14:28.013 "base_bdevs_list": [ 00:14:28.013 { 00:14:28.013 "name": "BaseBdev1", 00:14:28.013 "uuid": "3cf2a352-e898-49be-a825-ae798acb39ca", 00:14:28.013 "is_configured": true, 00:14:28.013 "data_offset": 0, 00:14:28.013 "data_size": 65536 00:14:28.013 }, 00:14:28.013 { 00:14:28.013 "name": "BaseBdev2", 00:14:28.013 "uuid": "bad85b39-d433-404e-b914-bf41701da4c5", 00:14:28.013 "is_configured": true, 00:14:28.013 "data_offset": 0, 00:14:28.013 "data_size": 65536 00:14:28.013 }, 00:14:28.013 { 00:14:28.013 "name": "BaseBdev3", 00:14:28.013 "uuid": "1ec4a7c0-2782-49e3-97c5-9adfcb29f9f5", 00:14:28.013 "is_configured": true, 00:14:28.013 "data_offset": 0, 00:14:28.013 "data_size": 65536 00:14:28.013 } 00:14:28.013 ] 00:14:28.013 } 00:14:28.013 } 00:14:28.013 }' 00:14:28.013 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:28.273 BaseBdev2 00:14:28.273 BaseBdev3' 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.273 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.273 [2024-11-26 21:21:46.370144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:28.533 
21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.533 "name": "Existed_Raid", 00:14:28.533 "uuid": "050499ea-8038-47dd-9f4f-71b909b957fe", 00:14:28.533 "strip_size_kb": 64, 00:14:28.533 "state": 
"online", 00:14:28.533 "raid_level": "raid5f", 00:14:28.533 "superblock": false, 00:14:28.533 "num_base_bdevs": 3, 00:14:28.533 "num_base_bdevs_discovered": 2, 00:14:28.533 "num_base_bdevs_operational": 2, 00:14:28.533 "base_bdevs_list": [ 00:14:28.533 { 00:14:28.533 "name": null, 00:14:28.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.533 "is_configured": false, 00:14:28.533 "data_offset": 0, 00:14:28.533 "data_size": 65536 00:14:28.533 }, 00:14:28.533 { 00:14:28.533 "name": "BaseBdev2", 00:14:28.533 "uuid": "bad85b39-d433-404e-b914-bf41701da4c5", 00:14:28.533 "is_configured": true, 00:14:28.533 "data_offset": 0, 00:14:28.533 "data_size": 65536 00:14:28.533 }, 00:14:28.533 { 00:14:28.533 "name": "BaseBdev3", 00:14:28.533 "uuid": "1ec4a7c0-2782-49e3-97c5-9adfcb29f9f5", 00:14:28.533 "is_configured": true, 00:14:28.533 "data_offset": 0, 00:14:28.533 "data_size": 65536 00:14:28.533 } 00:14:28.533 ] 00:14:28.533 }' 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.533 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.794 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:28.794 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:28.794 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.794 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:28.794 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.794 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.054 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.054 21:21:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:29.054 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:29.054 21:21:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:29.054 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.054 21:21:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.054 [2024-11-26 21:21:46.984747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.054 [2024-11-26 21:21:46.984931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.054 [2024-11-26 21:21:47.083422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.054 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.054 [2024-11-26 21:21:47.127346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:29.054 [2024-11-26 21:21:47.127396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.314 BaseBdev2 00:14:29.314 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:29.315 [ 00:14:29.315 { 00:14:29.315 "name": "BaseBdev2", 00:14:29.315 "aliases": [ 00:14:29.315 "5e74a42e-ba17-43bb-b4d1-a9bf886a0588" 00:14:29.315 ], 00:14:29.315 "product_name": "Malloc disk", 00:14:29.315 "block_size": 512, 00:14:29.315 "num_blocks": 65536, 00:14:29.315 "uuid": "5e74a42e-ba17-43bb-b4d1-a9bf886a0588", 00:14:29.315 "assigned_rate_limits": { 00:14:29.315 "rw_ios_per_sec": 0, 00:14:29.315 "rw_mbytes_per_sec": 0, 00:14:29.315 "r_mbytes_per_sec": 0, 00:14:29.315 "w_mbytes_per_sec": 0 00:14:29.315 }, 00:14:29.315 "claimed": false, 00:14:29.315 "zoned": false, 00:14:29.315 "supported_io_types": { 00:14:29.315 "read": true, 00:14:29.315 "write": true, 00:14:29.315 "unmap": true, 00:14:29.315 "flush": true, 00:14:29.315 "reset": true, 00:14:29.315 "nvme_admin": false, 00:14:29.315 "nvme_io": false, 00:14:29.315 "nvme_io_md": false, 00:14:29.315 "write_zeroes": true, 00:14:29.315 "zcopy": true, 00:14:29.315 "get_zone_info": false, 00:14:29.315 "zone_management": false, 00:14:29.315 "zone_append": false, 00:14:29.315 "compare": false, 00:14:29.315 "compare_and_write": false, 00:14:29.315 "abort": true, 00:14:29.315 "seek_hole": false, 00:14:29.315 "seek_data": false, 00:14:29.315 "copy": true, 00:14:29.315 "nvme_iov_md": false 00:14:29.315 }, 00:14:29.315 "memory_domains": [ 00:14:29.315 { 00:14:29.315 "dma_device_id": "system", 00:14:29.315 "dma_device_type": 1 00:14:29.315 }, 00:14:29.315 { 00:14:29.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.315 "dma_device_type": 2 00:14:29.315 } 00:14:29.315 ], 00:14:29.315 "driver_specific": {} 00:14:29.315 } 00:14:29.315 ] 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.315 BaseBdev3 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.315 [ 00:14:29.315 { 00:14:29.315 "name": "BaseBdev3", 00:14:29.315 "aliases": [ 00:14:29.315 "a01b8cf3-3125-40fc-bc0d-34fd2df25c64" 00:14:29.315 ], 00:14:29.315 "product_name": "Malloc disk", 00:14:29.315 "block_size": 512, 00:14:29.315 "num_blocks": 65536, 00:14:29.315 "uuid": "a01b8cf3-3125-40fc-bc0d-34fd2df25c64", 00:14:29.315 "assigned_rate_limits": { 00:14:29.315 "rw_ios_per_sec": 0, 00:14:29.315 "rw_mbytes_per_sec": 0, 00:14:29.315 "r_mbytes_per_sec": 0, 00:14:29.315 "w_mbytes_per_sec": 0 00:14:29.315 }, 00:14:29.315 "claimed": false, 00:14:29.315 "zoned": false, 00:14:29.315 "supported_io_types": { 00:14:29.315 "read": true, 00:14:29.315 "write": true, 00:14:29.315 "unmap": true, 00:14:29.315 "flush": true, 00:14:29.315 "reset": true, 00:14:29.315 "nvme_admin": false, 00:14:29.315 "nvme_io": false, 00:14:29.315 "nvme_io_md": false, 00:14:29.315 "write_zeroes": true, 00:14:29.315 "zcopy": true, 00:14:29.315 "get_zone_info": false, 00:14:29.315 "zone_management": false, 00:14:29.315 "zone_append": false, 00:14:29.315 "compare": false, 00:14:29.315 "compare_and_write": false, 00:14:29.315 "abort": true, 00:14:29.315 "seek_hole": false, 00:14:29.315 "seek_data": false, 00:14:29.315 "copy": true, 00:14:29.315 "nvme_iov_md": false 00:14:29.315 }, 00:14:29.315 "memory_domains": [ 00:14:29.315 { 00:14:29.315 "dma_device_id": "system", 00:14:29.315 "dma_device_type": 1 00:14:29.315 }, 00:14:29.315 { 00:14:29.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.315 "dma_device_type": 2 00:14:29.315 } 00:14:29.315 ], 00:14:29.315 "driver_specific": {} 00:14:29.315 } 00:14:29.315 ] 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:29.315 21:21:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.315 [2024-11-26 21:21:47.447935] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:29.315 [2024-11-26 21:21:47.448070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:29.315 [2024-11-26 21:21:47.448121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.315 [2024-11-26 21:21:47.450093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.315 21:21:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.315 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.575 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.575 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.575 "name": "Existed_Raid", 00:14:29.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.575 "strip_size_kb": 64, 00:14:29.575 "state": "configuring", 00:14:29.575 "raid_level": "raid5f", 00:14:29.575 "superblock": false, 00:14:29.575 "num_base_bdevs": 3, 00:14:29.575 "num_base_bdevs_discovered": 2, 00:14:29.575 "num_base_bdevs_operational": 3, 00:14:29.575 "base_bdevs_list": [ 00:14:29.575 { 00:14:29.575 "name": "BaseBdev1", 00:14:29.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.575 "is_configured": false, 00:14:29.575 "data_offset": 0, 00:14:29.575 "data_size": 0 00:14:29.575 }, 00:14:29.575 { 00:14:29.575 "name": "BaseBdev2", 00:14:29.575 "uuid": "5e74a42e-ba17-43bb-b4d1-a9bf886a0588", 00:14:29.575 "is_configured": true, 00:14:29.575 "data_offset": 0, 00:14:29.575 "data_size": 65536 00:14:29.575 }, 00:14:29.575 { 00:14:29.575 "name": "BaseBdev3", 00:14:29.575 "uuid": "a01b8cf3-3125-40fc-bc0d-34fd2df25c64", 00:14:29.575 "is_configured": true, 
00:14:29.575 "data_offset": 0, 00:14:29.575 "data_size": 65536 00:14:29.575 } 00:14:29.575 ] 00:14:29.575 }' 00:14:29.575 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.575 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.835 [2024-11-26 21:21:47.879142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.835 21:21:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.835 "name": "Existed_Raid", 00:14:29.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.835 "strip_size_kb": 64, 00:14:29.835 "state": "configuring", 00:14:29.835 "raid_level": "raid5f", 00:14:29.835 "superblock": false, 00:14:29.835 "num_base_bdevs": 3, 00:14:29.835 "num_base_bdevs_discovered": 1, 00:14:29.835 "num_base_bdevs_operational": 3, 00:14:29.835 "base_bdevs_list": [ 00:14:29.835 { 00:14:29.835 "name": "BaseBdev1", 00:14:29.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.835 "is_configured": false, 00:14:29.835 "data_offset": 0, 00:14:29.835 "data_size": 0 00:14:29.835 }, 00:14:29.835 { 00:14:29.835 "name": null, 00:14:29.835 "uuid": "5e74a42e-ba17-43bb-b4d1-a9bf886a0588", 00:14:29.835 "is_configured": false, 00:14:29.835 "data_offset": 0, 00:14:29.835 "data_size": 65536 00:14:29.835 }, 00:14:29.835 { 00:14:29.835 "name": "BaseBdev3", 00:14:29.835 "uuid": "a01b8cf3-3125-40fc-bc0d-34fd2df25c64", 00:14:29.835 "is_configured": true, 00:14:29.835 "data_offset": 0, 00:14:29.835 "data_size": 65536 00:14:29.835 } 00:14:29.835 ] 00:14:29.835 }' 00:14:29.835 21:21:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.835 21:21:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.406 [2024-11-26 21:21:48.363105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.406 BaseBdev1 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.406 21:21:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.406 [ 00:14:30.406 { 00:14:30.406 "name": "BaseBdev1", 00:14:30.406 "aliases": [ 00:14:30.406 "fa4fca2d-75a3-42bc-b625-66937dfc5700" 00:14:30.406 ], 00:14:30.406 "product_name": "Malloc disk", 00:14:30.406 "block_size": 512, 00:14:30.406 "num_blocks": 65536, 00:14:30.406 "uuid": "fa4fca2d-75a3-42bc-b625-66937dfc5700", 00:14:30.406 "assigned_rate_limits": { 00:14:30.406 "rw_ios_per_sec": 0, 00:14:30.406 "rw_mbytes_per_sec": 0, 00:14:30.406 "r_mbytes_per_sec": 0, 00:14:30.406 "w_mbytes_per_sec": 0 00:14:30.406 }, 00:14:30.406 "claimed": true, 00:14:30.406 "claim_type": "exclusive_write", 00:14:30.406 "zoned": false, 00:14:30.406 "supported_io_types": { 00:14:30.406 "read": true, 00:14:30.406 "write": true, 00:14:30.406 "unmap": true, 00:14:30.406 "flush": true, 00:14:30.406 "reset": true, 00:14:30.406 "nvme_admin": false, 00:14:30.406 "nvme_io": false, 00:14:30.406 "nvme_io_md": false, 00:14:30.406 "write_zeroes": true, 00:14:30.406 "zcopy": true, 00:14:30.406 "get_zone_info": false, 00:14:30.406 "zone_management": false, 00:14:30.406 "zone_append": false, 00:14:30.406 
"compare": false, 00:14:30.406 "compare_and_write": false, 00:14:30.406 "abort": true, 00:14:30.406 "seek_hole": false, 00:14:30.406 "seek_data": false, 00:14:30.406 "copy": true, 00:14:30.406 "nvme_iov_md": false 00:14:30.406 }, 00:14:30.406 "memory_domains": [ 00:14:30.406 { 00:14:30.406 "dma_device_id": "system", 00:14:30.406 "dma_device_type": 1 00:14:30.406 }, 00:14:30.406 { 00:14:30.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.406 "dma_device_type": 2 00:14:30.406 } 00:14:30.406 ], 00:14:30.406 "driver_specific": {} 00:14:30.406 } 00:14:30.406 ] 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.406 21:21:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.406 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.406 "name": "Existed_Raid", 00:14:30.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.406 "strip_size_kb": 64, 00:14:30.406 "state": "configuring", 00:14:30.406 "raid_level": "raid5f", 00:14:30.406 "superblock": false, 00:14:30.406 "num_base_bdevs": 3, 00:14:30.406 "num_base_bdevs_discovered": 2, 00:14:30.406 "num_base_bdevs_operational": 3, 00:14:30.406 "base_bdevs_list": [ 00:14:30.406 { 00:14:30.406 "name": "BaseBdev1", 00:14:30.406 "uuid": "fa4fca2d-75a3-42bc-b625-66937dfc5700", 00:14:30.406 "is_configured": true, 00:14:30.406 "data_offset": 0, 00:14:30.406 "data_size": 65536 00:14:30.406 }, 00:14:30.406 { 00:14:30.406 "name": null, 00:14:30.406 "uuid": "5e74a42e-ba17-43bb-b4d1-a9bf886a0588", 00:14:30.406 "is_configured": false, 00:14:30.406 "data_offset": 0, 00:14:30.406 "data_size": 65536 00:14:30.407 }, 00:14:30.407 { 00:14:30.407 "name": "BaseBdev3", 00:14:30.407 "uuid": "a01b8cf3-3125-40fc-bc0d-34fd2df25c64", 00:14:30.407 "is_configured": true, 00:14:30.407 "data_offset": 0, 00:14:30.407 "data_size": 65536 00:14:30.407 } 00:14:30.407 ] 00:14:30.407 }' 00:14:30.407 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.407 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.022 21:21:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.022 [2024-11-26 21:21:48.938192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.022 21:21:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.022 "name": "Existed_Raid", 00:14:31.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.022 "strip_size_kb": 64, 00:14:31.022 "state": "configuring", 00:14:31.022 "raid_level": "raid5f", 00:14:31.022 "superblock": false, 00:14:31.022 "num_base_bdevs": 3, 00:14:31.022 "num_base_bdevs_discovered": 1, 00:14:31.022 "num_base_bdevs_operational": 3, 00:14:31.022 "base_bdevs_list": [ 00:14:31.022 { 00:14:31.022 "name": "BaseBdev1", 00:14:31.022 "uuid": "fa4fca2d-75a3-42bc-b625-66937dfc5700", 00:14:31.022 "is_configured": true, 00:14:31.022 "data_offset": 0, 00:14:31.022 "data_size": 65536 00:14:31.022 }, 00:14:31.022 { 00:14:31.022 "name": null, 00:14:31.022 "uuid": "5e74a42e-ba17-43bb-b4d1-a9bf886a0588", 00:14:31.022 "is_configured": false, 00:14:31.022 "data_offset": 0, 00:14:31.022 "data_size": 65536 00:14:31.022 }, 00:14:31.022 { 00:14:31.022 "name": null, 
00:14:31.022 "uuid": "a01b8cf3-3125-40fc-bc0d-34fd2df25c64", 00:14:31.022 "is_configured": false, 00:14:31.022 "data_offset": 0, 00:14:31.022 "data_size": 65536 00:14:31.022 } 00:14:31.022 ] 00:14:31.022 }' 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.022 21:21:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.291 [2024-11-26 21:21:49.409425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.291 21:21:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.291 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.550 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.550 "name": "Existed_Raid", 00:14:31.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.550 "strip_size_kb": 64, 00:14:31.550 "state": "configuring", 00:14:31.550 "raid_level": "raid5f", 00:14:31.550 "superblock": false, 00:14:31.550 "num_base_bdevs": 3, 00:14:31.550 "num_base_bdevs_discovered": 2, 00:14:31.550 "num_base_bdevs_operational": 3, 00:14:31.550 "base_bdevs_list": [ 00:14:31.550 { 
00:14:31.550 "name": "BaseBdev1", 00:14:31.550 "uuid": "fa4fca2d-75a3-42bc-b625-66937dfc5700", 00:14:31.550 "is_configured": true, 00:14:31.550 "data_offset": 0, 00:14:31.550 "data_size": 65536 00:14:31.550 }, 00:14:31.550 { 00:14:31.550 "name": null, 00:14:31.550 "uuid": "5e74a42e-ba17-43bb-b4d1-a9bf886a0588", 00:14:31.550 "is_configured": false, 00:14:31.550 "data_offset": 0, 00:14:31.550 "data_size": 65536 00:14:31.550 }, 00:14:31.550 { 00:14:31.550 "name": "BaseBdev3", 00:14:31.550 "uuid": "a01b8cf3-3125-40fc-bc0d-34fd2df25c64", 00:14:31.550 "is_configured": true, 00:14:31.550 "data_offset": 0, 00:14:31.550 "data_size": 65536 00:14:31.550 } 00:14:31.550 ] 00:14:31.550 }' 00:14:31.550 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.550 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.810 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.810 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.811 [2024-11-26 21:21:49.856657] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.811 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.071 21:21:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.071 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.071 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.071 21:21:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.071 21:21:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.071 "name": "Existed_Raid", 00:14:32.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.071 "strip_size_kb": 64, 00:14:32.071 "state": "configuring", 00:14:32.071 "raid_level": "raid5f", 00:14:32.071 "superblock": false, 00:14:32.071 "num_base_bdevs": 3, 00:14:32.071 "num_base_bdevs_discovered": 1, 00:14:32.071 "num_base_bdevs_operational": 3, 00:14:32.071 "base_bdevs_list": [ 00:14:32.071 { 00:14:32.071 "name": null, 00:14:32.071 "uuid": "fa4fca2d-75a3-42bc-b625-66937dfc5700", 00:14:32.071 "is_configured": false, 00:14:32.071 "data_offset": 0, 00:14:32.071 "data_size": 65536 00:14:32.071 }, 00:14:32.071 { 00:14:32.071 "name": null, 00:14:32.071 "uuid": "5e74a42e-ba17-43bb-b4d1-a9bf886a0588", 00:14:32.071 "is_configured": false, 00:14:32.071 "data_offset": 0, 00:14:32.071 "data_size": 65536 00:14:32.071 }, 00:14:32.071 { 00:14:32.071 "name": "BaseBdev3", 00:14:32.071 "uuid": "a01b8cf3-3125-40fc-bc0d-34fd2df25c64", 00:14:32.071 "is_configured": true, 00:14:32.071 "data_offset": 0, 00:14:32.071 "data_size": 65536 00:14:32.071 } 00:14:32.071 ] 00:14:32.071 }' 00:14:32.071 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.071 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.331 [2024-11-26 21:21:50.450509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.331 21:21:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.331 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.591 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.591 "name": "Existed_Raid", 00:14:32.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.591 "strip_size_kb": 64, 00:14:32.591 "state": "configuring", 00:14:32.591 "raid_level": "raid5f", 00:14:32.591 "superblock": false, 00:14:32.591 "num_base_bdevs": 3, 00:14:32.591 "num_base_bdevs_discovered": 2, 00:14:32.591 "num_base_bdevs_operational": 3, 00:14:32.591 "base_bdevs_list": [ 00:14:32.591 { 00:14:32.591 "name": null, 00:14:32.591 "uuid": "fa4fca2d-75a3-42bc-b625-66937dfc5700", 00:14:32.591 "is_configured": false, 00:14:32.591 "data_offset": 0, 00:14:32.591 "data_size": 65536 00:14:32.591 }, 00:14:32.591 { 00:14:32.591 "name": "BaseBdev2", 00:14:32.591 "uuid": "5e74a42e-ba17-43bb-b4d1-a9bf886a0588", 00:14:32.591 "is_configured": true, 00:14:32.591 "data_offset": 0, 00:14:32.591 "data_size": 65536 00:14:32.591 }, 00:14:32.591 { 00:14:32.591 "name": "BaseBdev3", 00:14:32.591 "uuid": "a01b8cf3-3125-40fc-bc0d-34fd2df25c64", 00:14:32.591 "is_configured": true, 00:14:32.591 "data_offset": 0, 00:14:32.591 "data_size": 65536 00:14:32.591 } 00:14:32.591 ] 00:14:32.591 }' 00:14:32.591 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.591 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:32.851 21:21:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fa4fca2d-75a3-42bc-b625-66937dfc5700 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.851 [2024-11-26 21:21:50.991228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:32.851 [2024-11-26 21:21:50.991341] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:32.851 [2024-11-26 21:21:50.991368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:32.851 [2024-11-26 21:21:50.991655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:14:32.851 [2024-11-26 21:21:50.996549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:32.851 [2024-11-26 21:21:50.996582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:32.851 [2024-11-26 21:21:50.996838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.851 NewBaseBdev 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.851 21:21:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.112 21:21:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.112 [ 00:14:33.112 { 00:14:33.112 "name": "NewBaseBdev", 00:14:33.112 "aliases": [ 00:14:33.112 "fa4fca2d-75a3-42bc-b625-66937dfc5700" 00:14:33.112 ], 00:14:33.112 "product_name": "Malloc disk", 00:14:33.112 "block_size": 512, 00:14:33.112 "num_blocks": 65536, 00:14:33.112 "uuid": "fa4fca2d-75a3-42bc-b625-66937dfc5700", 00:14:33.112 "assigned_rate_limits": { 00:14:33.112 "rw_ios_per_sec": 0, 00:14:33.112 "rw_mbytes_per_sec": 0, 00:14:33.112 "r_mbytes_per_sec": 0, 00:14:33.112 "w_mbytes_per_sec": 0 00:14:33.112 }, 00:14:33.112 "claimed": true, 00:14:33.112 "claim_type": "exclusive_write", 00:14:33.112 "zoned": false, 00:14:33.112 "supported_io_types": { 00:14:33.112 "read": true, 00:14:33.112 "write": true, 00:14:33.112 "unmap": true, 00:14:33.112 "flush": true, 00:14:33.112 "reset": true, 00:14:33.112 "nvme_admin": false, 00:14:33.112 "nvme_io": false, 00:14:33.112 "nvme_io_md": false, 00:14:33.112 "write_zeroes": true, 00:14:33.112 "zcopy": true, 00:14:33.112 "get_zone_info": false, 00:14:33.112 "zone_management": false, 00:14:33.112 "zone_append": false, 00:14:33.112 "compare": false, 00:14:33.112 "compare_and_write": false, 00:14:33.112 "abort": true, 00:14:33.112 "seek_hole": false, 00:14:33.112 "seek_data": false, 00:14:33.112 "copy": true, 00:14:33.112 "nvme_iov_md": false 00:14:33.112 }, 00:14:33.112 "memory_domains": [ 00:14:33.112 { 00:14:33.112 "dma_device_id": "system", 00:14:33.112 "dma_device_type": 1 00:14:33.112 }, 00:14:33.112 { 00:14:33.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.112 "dma_device_type": 2 00:14:33.112 } 00:14:33.112 ], 00:14:33.112 "driver_specific": {} 00:14:33.112 } 00:14:33.112 ] 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:33.112 21:21:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.112 "name": "Existed_Raid", 00:14:33.112 "uuid": "cae1cd22-a5ad-4881-afdc-ef11f73000b6", 00:14:33.112 "strip_size_kb": 64, 00:14:33.112 "state": "online", 
00:14:33.112 "raid_level": "raid5f", 00:14:33.112 "superblock": false, 00:14:33.112 "num_base_bdevs": 3, 00:14:33.112 "num_base_bdevs_discovered": 3, 00:14:33.112 "num_base_bdevs_operational": 3, 00:14:33.112 "base_bdevs_list": [ 00:14:33.112 { 00:14:33.112 "name": "NewBaseBdev", 00:14:33.112 "uuid": "fa4fca2d-75a3-42bc-b625-66937dfc5700", 00:14:33.112 "is_configured": true, 00:14:33.112 "data_offset": 0, 00:14:33.112 "data_size": 65536 00:14:33.112 }, 00:14:33.112 { 00:14:33.112 "name": "BaseBdev2", 00:14:33.112 "uuid": "5e74a42e-ba17-43bb-b4d1-a9bf886a0588", 00:14:33.112 "is_configured": true, 00:14:33.112 "data_offset": 0, 00:14:33.112 "data_size": 65536 00:14:33.112 }, 00:14:33.112 { 00:14:33.112 "name": "BaseBdev3", 00:14:33.112 "uuid": "a01b8cf3-3125-40fc-bc0d-34fd2df25c64", 00:14:33.112 "is_configured": true, 00:14:33.112 "data_offset": 0, 00:14:33.112 "data_size": 65536 00:14:33.112 } 00:14:33.112 ] 00:14:33.112 }' 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.112 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.373 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:33.373 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:33.373 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:33.373 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:33.373 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:33.373 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:33.373 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:33.373 21:21:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.373 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.373 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:33.373 [2024-11-26 21:21:51.515150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:33.634 "name": "Existed_Raid", 00:14:33.634 "aliases": [ 00:14:33.634 "cae1cd22-a5ad-4881-afdc-ef11f73000b6" 00:14:33.634 ], 00:14:33.634 "product_name": "Raid Volume", 00:14:33.634 "block_size": 512, 00:14:33.634 "num_blocks": 131072, 00:14:33.634 "uuid": "cae1cd22-a5ad-4881-afdc-ef11f73000b6", 00:14:33.634 "assigned_rate_limits": { 00:14:33.634 "rw_ios_per_sec": 0, 00:14:33.634 "rw_mbytes_per_sec": 0, 00:14:33.634 "r_mbytes_per_sec": 0, 00:14:33.634 "w_mbytes_per_sec": 0 00:14:33.634 }, 00:14:33.634 "claimed": false, 00:14:33.634 "zoned": false, 00:14:33.634 "supported_io_types": { 00:14:33.634 "read": true, 00:14:33.634 "write": true, 00:14:33.634 "unmap": false, 00:14:33.634 "flush": false, 00:14:33.634 "reset": true, 00:14:33.634 "nvme_admin": false, 00:14:33.634 "nvme_io": false, 00:14:33.634 "nvme_io_md": false, 00:14:33.634 "write_zeroes": true, 00:14:33.634 "zcopy": false, 00:14:33.634 "get_zone_info": false, 00:14:33.634 "zone_management": false, 00:14:33.634 "zone_append": false, 00:14:33.634 "compare": false, 00:14:33.634 "compare_and_write": false, 00:14:33.634 "abort": false, 00:14:33.634 "seek_hole": false, 00:14:33.634 "seek_data": false, 00:14:33.634 "copy": false, 00:14:33.634 "nvme_iov_md": false 00:14:33.634 }, 00:14:33.634 "driver_specific": { 00:14:33.634 "raid": { 00:14:33.634 "uuid": 
"cae1cd22-a5ad-4881-afdc-ef11f73000b6", 00:14:33.634 "strip_size_kb": 64, 00:14:33.634 "state": "online", 00:14:33.634 "raid_level": "raid5f", 00:14:33.634 "superblock": false, 00:14:33.634 "num_base_bdevs": 3, 00:14:33.634 "num_base_bdevs_discovered": 3, 00:14:33.634 "num_base_bdevs_operational": 3, 00:14:33.634 "base_bdevs_list": [ 00:14:33.634 { 00:14:33.634 "name": "NewBaseBdev", 00:14:33.634 "uuid": "fa4fca2d-75a3-42bc-b625-66937dfc5700", 00:14:33.634 "is_configured": true, 00:14:33.634 "data_offset": 0, 00:14:33.634 "data_size": 65536 00:14:33.634 }, 00:14:33.634 { 00:14:33.634 "name": "BaseBdev2", 00:14:33.634 "uuid": "5e74a42e-ba17-43bb-b4d1-a9bf886a0588", 00:14:33.634 "is_configured": true, 00:14:33.634 "data_offset": 0, 00:14:33.634 "data_size": 65536 00:14:33.634 }, 00:14:33.634 { 00:14:33.634 "name": "BaseBdev3", 00:14:33.634 "uuid": "a01b8cf3-3125-40fc-bc0d-34fd2df25c64", 00:14:33.634 "is_configured": true, 00:14:33.634 "data_offset": 0, 00:14:33.634 "data_size": 65536 00:14:33.634 } 00:14:33.634 ] 00:14:33.634 } 00:14:33.634 } 00:14:33.634 }' 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:33.634 BaseBdev2 00:14:33.634 BaseBdev3' 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.634 21:21:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.634 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.895 [2024-11-26 21:21:51.806464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:33.895 [2024-11-26 21:21:51.806487] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.895 [2024-11-26 21:21:51.806550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.895 [2024-11-26 21:21:51.806824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.895 [2024-11-26 21:21:51.806836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79687 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79687 ']' 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79687 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79687 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79687' 00:14:33.895 killing process with pid 79687 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79687 00:14:33.895 [2024-11-26 21:21:51.857751] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.895 21:21:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79687 00:14:34.154 [2024-11-26 21:21:52.167875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.538 ************************************ 00:14:35.538 END TEST raid5f_state_function_test 00:14:35.538 ************************************ 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:35.538 00:14:35.538 real 0m10.674s 00:14:35.538 user 0m16.701s 00:14:35.538 sys 0m2.066s 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 21:21:53 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:35.538 21:21:53 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:35.538 21:21:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.538 21:21:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 ************************************ 00:14:35.538 START TEST raid5f_state_function_test_sb 00:14:35.538 ************************************ 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:35.538 21:21:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:35.538 Process raid pid: 80308 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80308 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80308' 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80308 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80308 ']' 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.538 21:21:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.538 [2024-11-26 21:21:53.507263] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:14:35.538 [2024-11-26 21:21:53.507441] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.538 [2024-11-26 21:21:53.680482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.798 [2024-11-26 21:21:53.804495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.058 [2024-11-26 21:21:54.042108] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.058 [2024-11-26 21:21:54.042231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.318 [2024-11-26 21:21:54.324761] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.318 [2024-11-26 21:21:54.324905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.318 [2024-11-26 21:21:54.324935] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.318 [2024-11-26 21:21:54.324970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.318 [2024-11-26 21:21:54.324994] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:36.318 [2024-11-26 21:21:54.325015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.318 21:21:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.318 "name": "Existed_Raid", 00:14:36.318 "uuid": "b69d1f67-b5d1-4e0d-80ba-c5b69f40d2c4", 00:14:36.318 "strip_size_kb": 64, 00:14:36.318 "state": "configuring", 00:14:36.318 "raid_level": "raid5f", 00:14:36.318 "superblock": true, 00:14:36.318 "num_base_bdevs": 3, 00:14:36.318 "num_base_bdevs_discovered": 0, 00:14:36.318 "num_base_bdevs_operational": 3, 00:14:36.318 "base_bdevs_list": [ 00:14:36.318 { 00:14:36.318 "name": "BaseBdev1", 00:14:36.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.318 "is_configured": false, 00:14:36.318 "data_offset": 0, 00:14:36.318 "data_size": 0 00:14:36.318 }, 00:14:36.318 { 00:14:36.318 "name": "BaseBdev2", 00:14:36.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.318 "is_configured": false, 00:14:36.318 "data_offset": 0, 00:14:36.318 "data_size": 0 00:14:36.318 }, 00:14:36.318 { 00:14:36.318 "name": "BaseBdev3", 00:14:36.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.318 "is_configured": false, 00:14:36.318 "data_offset": 0, 00:14:36.318 "data_size": 0 00:14:36.318 } 00:14:36.318 ] 00:14:36.318 }' 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.318 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.888 [2024-11-26 21:21:54.768010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.888 
[2024-11-26 21:21:54.768084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.888 [2024-11-26 21:21:54.780008] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.888 [2024-11-26 21:21:54.780048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.888 [2024-11-26 21:21:54.780056] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.888 [2024-11-26 21:21:54.780065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.888 [2024-11-26 21:21:54.780071] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:36.888 [2024-11-26 21:21:54.780080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.888 [2024-11-26 21:21:54.834237] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.888 BaseBdev1 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.888 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.888 [ 00:14:36.888 { 00:14:36.888 "name": "BaseBdev1", 00:14:36.888 "aliases": [ 00:14:36.888 "4ffcd30d-d22a-4a52-b78c-f235891ca55b" 00:14:36.888 ], 00:14:36.888 "product_name": "Malloc disk", 00:14:36.888 "block_size": 512, 00:14:36.888 
"num_blocks": 65536, 00:14:36.888 "uuid": "4ffcd30d-d22a-4a52-b78c-f235891ca55b", 00:14:36.888 "assigned_rate_limits": { 00:14:36.888 "rw_ios_per_sec": 0, 00:14:36.888 "rw_mbytes_per_sec": 0, 00:14:36.888 "r_mbytes_per_sec": 0, 00:14:36.888 "w_mbytes_per_sec": 0 00:14:36.888 }, 00:14:36.888 "claimed": true, 00:14:36.888 "claim_type": "exclusive_write", 00:14:36.888 "zoned": false, 00:14:36.888 "supported_io_types": { 00:14:36.888 "read": true, 00:14:36.888 "write": true, 00:14:36.888 "unmap": true, 00:14:36.888 "flush": true, 00:14:36.888 "reset": true, 00:14:36.888 "nvme_admin": false, 00:14:36.888 "nvme_io": false, 00:14:36.888 "nvme_io_md": false, 00:14:36.888 "write_zeroes": true, 00:14:36.888 "zcopy": true, 00:14:36.888 "get_zone_info": false, 00:14:36.888 "zone_management": false, 00:14:36.888 "zone_append": false, 00:14:36.888 "compare": false, 00:14:36.888 "compare_and_write": false, 00:14:36.888 "abort": true, 00:14:36.888 "seek_hole": false, 00:14:36.888 "seek_data": false, 00:14:36.888 "copy": true, 00:14:36.888 "nvme_iov_md": false 00:14:36.888 }, 00:14:36.888 "memory_domains": [ 00:14:36.888 { 00:14:36.888 "dma_device_id": "system", 00:14:36.888 "dma_device_type": 1 00:14:36.888 }, 00:14:36.888 { 00:14:36.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.888 "dma_device_type": 2 00:14:36.888 } 00:14:36.888 ], 00:14:36.889 "driver_specific": {} 00:14:36.889 } 00:14:36.889 ] 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.889 "name": "Existed_Raid", 00:14:36.889 "uuid": "84fa1c83-2375-4d1c-9eb3-3a312d12e917", 00:14:36.889 "strip_size_kb": 64, 00:14:36.889 "state": "configuring", 00:14:36.889 "raid_level": "raid5f", 00:14:36.889 "superblock": true, 00:14:36.889 "num_base_bdevs": 3, 00:14:36.889 "num_base_bdevs_discovered": 1, 00:14:36.889 "num_base_bdevs_operational": 3, 00:14:36.889 "base_bdevs_list": [ 00:14:36.889 { 00:14:36.889 
"name": "BaseBdev1", 00:14:36.889 "uuid": "4ffcd30d-d22a-4a52-b78c-f235891ca55b", 00:14:36.889 "is_configured": true, 00:14:36.889 "data_offset": 2048, 00:14:36.889 "data_size": 63488 00:14:36.889 }, 00:14:36.889 { 00:14:36.889 "name": "BaseBdev2", 00:14:36.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.889 "is_configured": false, 00:14:36.889 "data_offset": 0, 00:14:36.889 "data_size": 0 00:14:36.889 }, 00:14:36.889 { 00:14:36.889 "name": "BaseBdev3", 00:14:36.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.889 "is_configured": false, 00:14:36.889 "data_offset": 0, 00:14:36.889 "data_size": 0 00:14:36.889 } 00:14:36.889 ] 00:14:36.889 }' 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.889 21:21:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.459 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:37.459 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.459 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.459 [2024-11-26 21:21:55.325475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.459 [2024-11-26 21:21:55.325576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:37.459 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.459 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:37.459 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.459 21:21:55 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:37.459 [2024-11-26 21:21:55.333523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.459 [2024-11-26 21:21:55.335472] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:37.459 [2024-11-26 21:21:55.335509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:37.459 [2024-11-26 21:21:55.335519] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:37.459 [2024-11-26 21:21:55.335527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:37.459 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.459 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.460 "name": "Existed_Raid", 00:14:37.460 "uuid": "8dbcf8f8-0324-44ea-bd49-264015300bf9", 00:14:37.460 "strip_size_kb": 64, 00:14:37.460 "state": "configuring", 00:14:37.460 "raid_level": "raid5f", 00:14:37.460 "superblock": true, 00:14:37.460 "num_base_bdevs": 3, 00:14:37.460 "num_base_bdevs_discovered": 1, 00:14:37.460 "num_base_bdevs_operational": 3, 00:14:37.460 "base_bdevs_list": [ 00:14:37.460 { 00:14:37.460 "name": "BaseBdev1", 00:14:37.460 "uuid": "4ffcd30d-d22a-4a52-b78c-f235891ca55b", 00:14:37.460 "is_configured": true, 00:14:37.460 "data_offset": 2048, 00:14:37.460 "data_size": 63488 00:14:37.460 }, 00:14:37.460 { 00:14:37.460 "name": "BaseBdev2", 00:14:37.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.460 "is_configured": false, 00:14:37.460 "data_offset": 0, 00:14:37.460 "data_size": 0 00:14:37.460 }, 00:14:37.460 { 00:14:37.460 "name": "BaseBdev3", 00:14:37.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.460 "is_configured": false, 00:14:37.460 "data_offset": 0, 00:14:37.460 "data_size": 
0 00:14:37.460 } 00:14:37.460 ] 00:14:37.460 }' 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.460 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.720 [2024-11-26 21:21:55.838890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.720 BaseBdev2 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:37.720 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.721 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.721 [ 00:14:37.721 { 00:14:37.721 "name": "BaseBdev2", 00:14:37.721 "aliases": [ 00:14:37.721 "489d5d09-2b0a-4e0b-bc0b-bbedbd6761a2" 00:14:37.721 ], 00:14:37.721 "product_name": "Malloc disk", 00:14:37.721 "block_size": 512, 00:14:37.721 "num_blocks": 65536, 00:14:37.721 "uuid": "489d5d09-2b0a-4e0b-bc0b-bbedbd6761a2", 00:14:37.721 "assigned_rate_limits": { 00:14:37.721 "rw_ios_per_sec": 0, 00:14:37.721 "rw_mbytes_per_sec": 0, 00:14:37.721 "r_mbytes_per_sec": 0, 00:14:37.721 "w_mbytes_per_sec": 0 00:14:37.721 }, 00:14:37.721 "claimed": true, 00:14:37.721 "claim_type": "exclusive_write", 00:14:37.721 "zoned": false, 00:14:37.721 "supported_io_types": { 00:14:37.721 "read": true, 00:14:37.721 "write": true, 00:14:37.721 "unmap": true, 00:14:37.721 "flush": true, 00:14:37.721 "reset": true, 00:14:37.721 "nvme_admin": false, 00:14:37.721 "nvme_io": false, 00:14:37.721 "nvme_io_md": false, 00:14:37.721 "write_zeroes": true, 00:14:37.721 "zcopy": true, 00:14:37.721 "get_zone_info": false, 00:14:37.721 "zone_management": false, 00:14:37.981 "zone_append": false, 00:14:37.981 "compare": false, 00:14:37.981 "compare_and_write": false, 00:14:37.981 "abort": true, 00:14:37.981 "seek_hole": false, 00:14:37.981 "seek_data": false, 00:14:37.981 "copy": true, 00:14:37.981 "nvme_iov_md": false 00:14:37.981 }, 00:14:37.981 "memory_domains": [ 00:14:37.981 { 00:14:37.981 "dma_device_id": "system", 00:14:37.981 "dma_device_type": 1 00:14:37.981 }, 00:14:37.981 { 00:14:37.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.981 "dma_device_type": 2 00:14:37.981 } 
00:14:37.981 ], 00:14:37.981 "driver_specific": {} 00:14:37.981 } 00:14:37.981 ] 00:14:37.981 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.981 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:37.981 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.982 "name": "Existed_Raid", 00:14:37.982 "uuid": "8dbcf8f8-0324-44ea-bd49-264015300bf9", 00:14:37.982 "strip_size_kb": 64, 00:14:37.982 "state": "configuring", 00:14:37.982 "raid_level": "raid5f", 00:14:37.982 "superblock": true, 00:14:37.982 "num_base_bdevs": 3, 00:14:37.982 "num_base_bdevs_discovered": 2, 00:14:37.982 "num_base_bdevs_operational": 3, 00:14:37.982 "base_bdevs_list": [ 00:14:37.982 { 00:14:37.982 "name": "BaseBdev1", 00:14:37.982 "uuid": "4ffcd30d-d22a-4a52-b78c-f235891ca55b", 00:14:37.982 "is_configured": true, 00:14:37.982 "data_offset": 2048, 00:14:37.982 "data_size": 63488 00:14:37.982 }, 00:14:37.982 { 00:14:37.982 "name": "BaseBdev2", 00:14:37.982 "uuid": "489d5d09-2b0a-4e0b-bc0b-bbedbd6761a2", 00:14:37.982 "is_configured": true, 00:14:37.982 "data_offset": 2048, 00:14:37.982 "data_size": 63488 00:14:37.982 }, 00:14:37.982 { 00:14:37.982 "name": "BaseBdev3", 00:14:37.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.982 "is_configured": false, 00:14:37.982 "data_offset": 0, 00:14:37.982 "data_size": 0 00:14:37.982 } 00:14:37.982 ] 00:14:37.982 }' 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.982 21:21:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.242 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:38.242 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:38.242 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.502 [2024-11-26 21:21:56.406229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.502 [2024-11-26 21:21:56.406601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:38.502 [2024-11-26 21:21:56.406628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:38.502 [2024-11-26 21:21:56.406917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:38.502 BaseBdev3 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.502 [2024-11-26 21:21:56.412196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:38.502 [2024-11-26 21:21:56.412261] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:38.502 [2024-11-26 21:21:56.412468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.502 [ 00:14:38.502 { 00:14:38.502 "name": "BaseBdev3", 00:14:38.502 "aliases": [ 00:14:38.502 "fb0e4c02-cc58-44b5-8fcb-e5a44ab7d753" 00:14:38.502 ], 00:14:38.502 "product_name": "Malloc disk", 00:14:38.502 "block_size": 512, 00:14:38.502 "num_blocks": 65536, 00:14:38.502 "uuid": "fb0e4c02-cc58-44b5-8fcb-e5a44ab7d753", 00:14:38.502 "assigned_rate_limits": { 00:14:38.502 "rw_ios_per_sec": 0, 00:14:38.502 "rw_mbytes_per_sec": 0, 00:14:38.502 "r_mbytes_per_sec": 0, 00:14:38.502 "w_mbytes_per_sec": 0 00:14:38.502 }, 00:14:38.502 "claimed": true, 00:14:38.502 "claim_type": "exclusive_write", 00:14:38.502 "zoned": false, 00:14:38.502 "supported_io_types": { 00:14:38.502 "read": true, 00:14:38.502 "write": true, 00:14:38.502 "unmap": true, 00:14:38.502 "flush": true, 00:14:38.502 "reset": true, 00:14:38.502 "nvme_admin": false, 00:14:38.502 "nvme_io": false, 00:14:38.502 "nvme_io_md": false, 00:14:38.502 "write_zeroes": true, 00:14:38.502 "zcopy": true, 00:14:38.502 "get_zone_info": false, 00:14:38.502 "zone_management": false, 00:14:38.502 "zone_append": false, 00:14:38.502 "compare": false, 00:14:38.502 "compare_and_write": false, 00:14:38.502 "abort": true, 00:14:38.502 "seek_hole": false, 00:14:38.502 "seek_data": false, 00:14:38.502 "copy": true, 00:14:38.502 
"nvme_iov_md": false 00:14:38.502 }, 00:14:38.502 "memory_domains": [ 00:14:38.502 { 00:14:38.502 "dma_device_id": "system", 00:14:38.502 "dma_device_type": 1 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.502 "dma_device_type": 2 00:14:38.502 } 00:14:38.502 ], 00:14:38.502 "driver_specific": {} 00:14:38.502 } 00:14:38.502 ] 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.502 "name": "Existed_Raid", 00:14:38.502 "uuid": "8dbcf8f8-0324-44ea-bd49-264015300bf9", 00:14:38.502 "strip_size_kb": 64, 00:14:38.502 "state": "online", 00:14:38.502 "raid_level": "raid5f", 00:14:38.502 "superblock": true, 00:14:38.502 "num_base_bdevs": 3, 00:14:38.502 "num_base_bdevs_discovered": 3, 00:14:38.502 "num_base_bdevs_operational": 3, 00:14:38.502 "base_bdevs_list": [ 00:14:38.502 { 00:14:38.502 "name": "BaseBdev1", 00:14:38.502 "uuid": "4ffcd30d-d22a-4a52-b78c-f235891ca55b", 00:14:38.502 "is_configured": true, 00:14:38.502 "data_offset": 2048, 00:14:38.502 "data_size": 63488 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "name": "BaseBdev2", 00:14:38.502 "uuid": "489d5d09-2b0a-4e0b-bc0b-bbedbd6761a2", 00:14:38.502 "is_configured": true, 00:14:38.502 "data_offset": 2048, 00:14:38.502 "data_size": 63488 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "name": "BaseBdev3", 00:14:38.502 "uuid": "fb0e4c02-cc58-44b5-8fcb-e5a44ab7d753", 00:14:38.502 "is_configured": true, 00:14:38.502 "data_offset": 2048, 00:14:38.502 "data_size": 63488 00:14:38.502 } 00:14:38.502 ] 00:14:38.502 }' 00:14:38.502 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.502 21:21:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.762 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:38.762 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:38.762 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:38.762 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:38.762 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:38.762 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:38.762 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:38.762 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:38.762 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.762 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.762 [2024-11-26 21:21:56.894511] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.762 21:21:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.022 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.022 "name": "Existed_Raid", 00:14:39.022 "aliases": [ 00:14:39.022 "8dbcf8f8-0324-44ea-bd49-264015300bf9" 00:14:39.022 ], 00:14:39.022 "product_name": "Raid Volume", 00:14:39.022 "block_size": 512, 00:14:39.022 "num_blocks": 126976, 00:14:39.022 "uuid": "8dbcf8f8-0324-44ea-bd49-264015300bf9", 00:14:39.022 "assigned_rate_limits": { 00:14:39.022 "rw_ios_per_sec": 0, 00:14:39.022 
"rw_mbytes_per_sec": 0, 00:14:39.022 "r_mbytes_per_sec": 0, 00:14:39.022 "w_mbytes_per_sec": 0 00:14:39.022 }, 00:14:39.022 "claimed": false, 00:14:39.022 "zoned": false, 00:14:39.022 "supported_io_types": { 00:14:39.022 "read": true, 00:14:39.022 "write": true, 00:14:39.022 "unmap": false, 00:14:39.022 "flush": false, 00:14:39.022 "reset": true, 00:14:39.022 "nvme_admin": false, 00:14:39.022 "nvme_io": false, 00:14:39.022 "nvme_io_md": false, 00:14:39.022 "write_zeroes": true, 00:14:39.022 "zcopy": false, 00:14:39.022 "get_zone_info": false, 00:14:39.022 "zone_management": false, 00:14:39.022 "zone_append": false, 00:14:39.022 "compare": false, 00:14:39.022 "compare_and_write": false, 00:14:39.022 "abort": false, 00:14:39.022 "seek_hole": false, 00:14:39.022 "seek_data": false, 00:14:39.022 "copy": false, 00:14:39.022 "nvme_iov_md": false 00:14:39.022 }, 00:14:39.022 "driver_specific": { 00:14:39.022 "raid": { 00:14:39.022 "uuid": "8dbcf8f8-0324-44ea-bd49-264015300bf9", 00:14:39.022 "strip_size_kb": 64, 00:14:39.022 "state": "online", 00:14:39.022 "raid_level": "raid5f", 00:14:39.022 "superblock": true, 00:14:39.022 "num_base_bdevs": 3, 00:14:39.022 "num_base_bdevs_discovered": 3, 00:14:39.022 "num_base_bdevs_operational": 3, 00:14:39.022 "base_bdevs_list": [ 00:14:39.022 { 00:14:39.022 "name": "BaseBdev1", 00:14:39.022 "uuid": "4ffcd30d-d22a-4a52-b78c-f235891ca55b", 00:14:39.022 "is_configured": true, 00:14:39.022 "data_offset": 2048, 00:14:39.022 "data_size": 63488 00:14:39.022 }, 00:14:39.022 { 00:14:39.022 "name": "BaseBdev2", 00:14:39.022 "uuid": "489d5d09-2b0a-4e0b-bc0b-bbedbd6761a2", 00:14:39.022 "is_configured": true, 00:14:39.022 "data_offset": 2048, 00:14:39.022 "data_size": 63488 00:14:39.022 }, 00:14:39.022 { 00:14:39.022 "name": "BaseBdev3", 00:14:39.022 "uuid": "fb0e4c02-cc58-44b5-8fcb-e5a44ab7d753", 00:14:39.022 "is_configured": true, 00:14:39.022 "data_offset": 2048, 00:14:39.022 "data_size": 63488 00:14:39.022 } 00:14:39.022 ] 00:14:39.022 } 
00:14:39.022 } 00:14:39.022 }' 00:14:39.022 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.022 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:39.022 BaseBdev2 00:14:39.022 BaseBdev3' 00:14:39.022 21:21:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.022 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.022 [2024-11-26 21:21:57.161917] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.282 "name": "Existed_Raid", 00:14:39.282 "uuid": "8dbcf8f8-0324-44ea-bd49-264015300bf9", 00:14:39.282 "strip_size_kb": 64, 00:14:39.282 "state": "online", 00:14:39.282 "raid_level": "raid5f", 00:14:39.282 "superblock": true, 00:14:39.282 "num_base_bdevs": 3, 00:14:39.282 "num_base_bdevs_discovered": 2, 00:14:39.282 "num_base_bdevs_operational": 2, 00:14:39.282 "base_bdevs_list": [ 00:14:39.282 { 00:14:39.282 "name": null, 00:14:39.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.282 "is_configured": false, 00:14:39.282 "data_offset": 0, 00:14:39.282 "data_size": 63488 00:14:39.282 }, 00:14:39.282 { 00:14:39.282 "name": "BaseBdev2", 00:14:39.282 "uuid": "489d5d09-2b0a-4e0b-bc0b-bbedbd6761a2", 00:14:39.282 "is_configured": true, 00:14:39.282 "data_offset": 2048, 00:14:39.282 "data_size": 63488 00:14:39.282 }, 00:14:39.282 { 00:14:39.282 "name": "BaseBdev3", 00:14:39.282 "uuid": "fb0e4c02-cc58-44b5-8fcb-e5a44ab7d753", 00:14:39.282 "is_configured": true, 00:14:39.282 "data_offset": 2048, 00:14:39.282 "data_size": 63488 00:14:39.282 } 00:14:39.282 ] 00:14:39.282 }' 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.282 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.542 21:21:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:39.542 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:39.542 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:39.542 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.542 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.542 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.542 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.542 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:39.802 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:39.802 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:39.802 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.802 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.802 [2024-11-26 21:21:57.701332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:39.802 [2024-11-26 21:21:57.701523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.802 [2024-11-26 21:21:57.799061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.802 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.802 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:39.802 21:21:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:39.802 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.802 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.803 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:39.803 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.803 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.803 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:39.803 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:39.803 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:39.803 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.803 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.803 [2024-11-26 21:21:57.854985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:39.803 [2024-11-26 21:21:57.855033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:40.063 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.063 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:40.063 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.063 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.063 21:21:57 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.063 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.063 21:21:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:40.063 21:21:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.063 BaseBdev2 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.063 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.063 [ 00:14:40.063 { 00:14:40.063 "name": "BaseBdev2", 00:14:40.063 "aliases": [ 00:14:40.063 "b2dd102d-065e-40c4-b4ba-c9554c925af1" 00:14:40.063 ], 00:14:40.063 "product_name": "Malloc disk", 00:14:40.064 "block_size": 512, 00:14:40.064 "num_blocks": 65536, 00:14:40.064 "uuid": "b2dd102d-065e-40c4-b4ba-c9554c925af1", 00:14:40.064 "assigned_rate_limits": { 00:14:40.064 "rw_ios_per_sec": 0, 00:14:40.064 "rw_mbytes_per_sec": 0, 00:14:40.064 "r_mbytes_per_sec": 0, 00:14:40.064 "w_mbytes_per_sec": 0 00:14:40.064 }, 00:14:40.064 "claimed": false, 00:14:40.064 "zoned": false, 00:14:40.064 "supported_io_types": { 00:14:40.064 "read": true, 00:14:40.064 "write": true, 00:14:40.064 "unmap": true, 00:14:40.064 "flush": true, 00:14:40.064 "reset": true, 00:14:40.064 "nvme_admin": false, 00:14:40.064 "nvme_io": false, 00:14:40.064 "nvme_io_md": false, 00:14:40.064 "write_zeroes": true, 00:14:40.064 "zcopy": true, 00:14:40.064 "get_zone_info": false, 00:14:40.064 "zone_management": false, 00:14:40.064 "zone_append": false, 
00:14:40.064 "compare": false, 00:14:40.064 "compare_and_write": false, 00:14:40.064 "abort": true, 00:14:40.064 "seek_hole": false, 00:14:40.064 "seek_data": false, 00:14:40.064 "copy": true, 00:14:40.064 "nvme_iov_md": false 00:14:40.064 }, 00:14:40.064 "memory_domains": [ 00:14:40.064 { 00:14:40.064 "dma_device_id": "system", 00:14:40.064 "dma_device_type": 1 00:14:40.064 }, 00:14:40.064 { 00:14:40.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.064 "dma_device_type": 2 00:14:40.064 } 00:14:40.064 ], 00:14:40.064 "driver_specific": {} 00:14:40.064 } 00:14:40.064 ] 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.064 BaseBdev3 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:40.064 
21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.064 [ 00:14:40.064 { 00:14:40.064 "name": "BaseBdev3", 00:14:40.064 "aliases": [ 00:14:40.064 "03110350-514b-4ba7-b604-87b70b708a23" 00:14:40.064 ], 00:14:40.064 "product_name": "Malloc disk", 00:14:40.064 "block_size": 512, 00:14:40.064 "num_blocks": 65536, 00:14:40.064 "uuid": "03110350-514b-4ba7-b604-87b70b708a23", 00:14:40.064 "assigned_rate_limits": { 00:14:40.064 "rw_ios_per_sec": 0, 00:14:40.064 "rw_mbytes_per_sec": 0, 00:14:40.064 "r_mbytes_per_sec": 0, 00:14:40.064 "w_mbytes_per_sec": 0 00:14:40.064 }, 00:14:40.064 "claimed": false, 00:14:40.064 "zoned": false, 00:14:40.064 "supported_io_types": { 00:14:40.064 "read": true, 00:14:40.064 "write": true, 00:14:40.064 "unmap": true, 00:14:40.064 "flush": true, 00:14:40.064 "reset": true, 00:14:40.064 "nvme_admin": false, 00:14:40.064 "nvme_io": false, 00:14:40.064 "nvme_io_md": false, 00:14:40.064 "write_zeroes": true, 00:14:40.064 "zcopy": true, 00:14:40.064 "get_zone_info": 
false, 00:14:40.064 "zone_management": false, 00:14:40.064 "zone_append": false, 00:14:40.064 "compare": false, 00:14:40.064 "compare_and_write": false, 00:14:40.064 "abort": true, 00:14:40.064 "seek_hole": false, 00:14:40.064 "seek_data": false, 00:14:40.064 "copy": true, 00:14:40.064 "nvme_iov_md": false 00:14:40.064 }, 00:14:40.064 "memory_domains": [ 00:14:40.064 { 00:14:40.064 "dma_device_id": "system", 00:14:40.064 "dma_device_type": 1 00:14:40.064 }, 00:14:40.064 { 00:14:40.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.064 "dma_device_type": 2 00:14:40.064 } 00:14:40.064 ], 00:14:40.064 "driver_specific": {} 00:14:40.064 } 00:14:40.064 ] 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.064 [2024-11-26 21:21:58.180766] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.064 [2024-11-26 21:21:58.180825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.064 [2024-11-26 21:21:58.180847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.064 [2024-11-26 21:21:58.182870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.064 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.324 21:21:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.324 "name": "Existed_Raid", 00:14:40.324 "uuid": "7edfeb21-9b6d-404e-a98c-442bdd36b044", 00:14:40.324 "strip_size_kb": 64, 00:14:40.324 "state": "configuring", 00:14:40.324 "raid_level": "raid5f", 00:14:40.324 "superblock": true, 00:14:40.324 "num_base_bdevs": 3, 00:14:40.324 "num_base_bdevs_discovered": 2, 00:14:40.324 "num_base_bdevs_operational": 3, 00:14:40.324 "base_bdevs_list": [ 00:14:40.324 { 00:14:40.324 "name": "BaseBdev1", 00:14:40.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.324 "is_configured": false, 00:14:40.324 "data_offset": 0, 00:14:40.324 "data_size": 0 00:14:40.324 }, 00:14:40.324 { 00:14:40.324 "name": "BaseBdev2", 00:14:40.324 "uuid": "b2dd102d-065e-40c4-b4ba-c9554c925af1", 00:14:40.324 "is_configured": true, 00:14:40.324 "data_offset": 2048, 00:14:40.324 "data_size": 63488 00:14:40.324 }, 00:14:40.324 { 00:14:40.324 "name": "BaseBdev3", 00:14:40.324 "uuid": "03110350-514b-4ba7-b604-87b70b708a23", 00:14:40.324 "is_configured": true, 00:14:40.324 "data_offset": 2048, 00:14:40.324 "data_size": 63488 00:14:40.324 } 00:14:40.324 ] 00:14:40.324 }' 00:14:40.324 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.324 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.584 [2024-11-26 21:21:58.604066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.584 
21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:40.584 "name": "Existed_Raid",
00:14:40.584 "uuid": "7edfeb21-9b6d-404e-a98c-442bdd36b044",
00:14:40.584 "strip_size_kb": 64,
00:14:40.584 "state": "configuring",
00:14:40.584 "raid_level": "raid5f",
00:14:40.584 "superblock": true,
00:14:40.584 "num_base_bdevs": 3,
00:14:40.584 "num_base_bdevs_discovered": 1,
00:14:40.584 "num_base_bdevs_operational": 3,
00:14:40.584 "base_bdevs_list": [
00:14:40.584 {
00:14:40.584 "name": "BaseBdev1",
00:14:40.584 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:40.584 "is_configured": false,
00:14:40.584 "data_offset": 0,
00:14:40.584 "data_size": 0
00:14:40.584 },
00:14:40.584 {
00:14:40.584 "name": null,
00:14:40.584 "uuid": "b2dd102d-065e-40c4-b4ba-c9554c925af1",
00:14:40.584 "is_configured": false,
00:14:40.584 "data_offset": 0,
00:14:40.584 "data_size": 63488
00:14:40.584 },
00:14:40.584 {
00:14:40.584 "name": "BaseBdev3",
00:14:40.584 "uuid": "03110350-514b-4ba7-b604-87b70b708a23",
00:14:40.584 "is_configured": true,
00:14:40.584 "data_offset": 2048,
00:14:40.584 "data_size": 63488
00:14:40.584 }
00:14:40.584 ]
00:14:40.584 }'
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:40.584 21:21:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:41.155 [2024-11-26 21:21:59.115540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:41.155 BaseBdev1
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.155 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:41.155 [
00:14:41.156 {
00:14:41.156 "name": "BaseBdev1",
00:14:41.156 "aliases": [
00:14:41.156 "c7c3c307-7e57-4a6e-98ca-644d92f99eb3"
00:14:41.156 ],
00:14:41.156 "product_name": "Malloc disk",
00:14:41.156 "block_size": 512,
00:14:41.156 "num_blocks": 65536,
00:14:41.156 "uuid": "c7c3c307-7e57-4a6e-98ca-644d92f99eb3",
00:14:41.156 "assigned_rate_limits": {
00:14:41.156 "rw_ios_per_sec": 0,
00:14:41.156 "rw_mbytes_per_sec": 0,
00:14:41.156 "r_mbytes_per_sec": 0,
00:14:41.156 "w_mbytes_per_sec": 0
00:14:41.156 },
00:14:41.156 "claimed": true,
00:14:41.156 "claim_type": "exclusive_write",
00:14:41.156 "zoned": false,
00:14:41.156 "supported_io_types": {
00:14:41.156 "read": true,
00:14:41.156 "write": true,
00:14:41.156 "unmap": true,
00:14:41.156 "flush": true,
00:14:41.156 "reset": true,
00:14:41.156 "nvme_admin": false,
00:14:41.156 "nvme_io": false,
00:14:41.156 "nvme_io_md": false,
00:14:41.156 "write_zeroes": true,
00:14:41.156 "zcopy": true,
00:14:41.156 "get_zone_info": false,
00:14:41.156 "zone_management": false,
00:14:41.156 "zone_append": false,
00:14:41.156 "compare": false,
00:14:41.156 "compare_and_write": false,
00:14:41.156 "abort": true,
00:14:41.156 "seek_hole": false,
00:14:41.156 "seek_data": false,
00:14:41.156 "copy": true,
00:14:41.156 "nvme_iov_md": false
00:14:41.156 },
00:14:41.156 "memory_domains": [
00:14:41.156 {
00:14:41.156 "dma_device_id": "system",
00:14:41.156 "dma_device_type": 1
00:14:41.156 },
00:14:41.156 {
00:14:41.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:41.156 "dma_device_type": 2
00:14:41.156 }
00:14:41.156 ],
00:14:41.156 "driver_specific": {}
00:14:41.156 }
00:14:41.156 ]
00:14:41.156 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.156 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:14:41.156 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:41.156 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:41.156 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:41.157 "name": "Existed_Raid",
00:14:41.157 "uuid": "7edfeb21-9b6d-404e-a98c-442bdd36b044",
00:14:41.157 "strip_size_kb": 64,
00:14:41.157 "state": "configuring",
00:14:41.157 "raid_level": "raid5f",
00:14:41.157 "superblock": true,
00:14:41.157 "num_base_bdevs": 3,
00:14:41.157 "num_base_bdevs_discovered": 2,
00:14:41.157 "num_base_bdevs_operational": 3,
00:14:41.157 "base_bdevs_list": [
00:14:41.157 {
00:14:41.157 "name": "BaseBdev1",
00:14:41.157 "uuid": "c7c3c307-7e57-4a6e-98ca-644d92f99eb3",
00:14:41.157 "is_configured": true,
00:14:41.157 "data_offset": 2048,
00:14:41.157 "data_size": 63488
00:14:41.157 },
00:14:41.157 {
00:14:41.157 "name": null,
00:14:41.157 "uuid": "b2dd102d-065e-40c4-b4ba-c9554c925af1",
00:14:41.157 "is_configured": false,
00:14:41.157 "data_offset": 0,
00:14:41.157 "data_size": 63488
00:14:41.157 },
00:14:41.157 {
00:14:41.157 "name": "BaseBdev3",
00:14:41.157 "uuid": "03110350-514b-4ba7-b604-87b70b708a23",
00:14:41.157 "is_configured": true,
00:14:41.157 "data_offset": 2048,
00:14:41.157 "data_size": 63488
00:14:41.157 }
00:14:41.157 ]
00:14:41.157 }'
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:41.157 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:41.732 [2024-11-26 21:21:59.670776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.732 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:41.733 "name": "Existed_Raid",
00:14:41.733 "uuid": "7edfeb21-9b6d-404e-a98c-442bdd36b044",
00:14:41.733 "strip_size_kb": 64,
00:14:41.733 "state": "configuring",
00:14:41.733 "raid_level": "raid5f",
00:14:41.733 "superblock": true,
00:14:41.733 "num_base_bdevs": 3,
00:14:41.733 "num_base_bdevs_discovered": 1,
00:14:41.733 "num_base_bdevs_operational": 3,
00:14:41.733 "base_bdevs_list": [
00:14:41.733 {
00:14:41.733 "name": "BaseBdev1",
00:14:41.733 "uuid": "c7c3c307-7e57-4a6e-98ca-644d92f99eb3",
00:14:41.733 "is_configured": true,
00:14:41.733 "data_offset": 2048,
00:14:41.733 "data_size": 63488
00:14:41.733 },
00:14:41.733 {
00:14:41.733 "name": null,
00:14:41.733 "uuid": "b2dd102d-065e-40c4-b4ba-c9554c925af1",
00:14:41.733 "is_configured": false,
00:14:41.733 "data_offset": 0,
00:14:41.733 "data_size": 63488
00:14:41.733 },
00:14:41.733 {
00:14:41.733 "name": null,
00:14:41.733 "uuid": "03110350-514b-4ba7-b604-87b70b708a23",
00:14:41.733 "is_configured": false,
00:14:41.733 "data_offset": 0,
00:14:41.733 "data_size": 63488
00:14:41.733 }
00:14:41.733 ]
00:14:41.733 }'
00:14:41.733 21:21:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:41.733 21:21:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:42.005 [2024-11-26 21:22:00.145991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:42.005 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:42.380 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:42.380 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.380 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:42.380 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:42.380 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.380 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:42.380 "name": "Existed_Raid",
00:14:42.380 "uuid": "7edfeb21-9b6d-404e-a98c-442bdd36b044",
00:14:42.380 "strip_size_kb": 64,
00:14:42.380 "state": "configuring",
00:14:42.380 "raid_level": "raid5f",
00:14:42.380 "superblock": true,
00:14:42.380 "num_base_bdevs": 3,
00:14:42.380 "num_base_bdevs_discovered": 2,
00:14:42.380 "num_base_bdevs_operational": 3,
00:14:42.380 "base_bdevs_list": [
00:14:42.380 {
00:14:42.380 "name": "BaseBdev1",
00:14:42.380 "uuid": "c7c3c307-7e57-4a6e-98ca-644d92f99eb3",
00:14:42.380 "is_configured": true,
00:14:42.380 "data_offset": 2048,
00:14:42.380 "data_size": 63488
00:14:42.380 },
00:14:42.380 {
00:14:42.380 "name": null,
00:14:42.380 "uuid": "b2dd102d-065e-40c4-b4ba-c9554c925af1",
00:14:42.380 "is_configured": false,
00:14:42.380 "data_offset": 0,
00:14:42.380 "data_size": 63488
00:14:42.380 },
00:14:42.380 {
00:14:42.380 "name": "BaseBdev3",
00:14:42.380 "uuid": "03110350-514b-4ba7-b604-87b70b708a23",
00:14:42.380 "is_configured": true,
00:14:42.380 "data_offset": 2048,
00:14:42.380 "data_size": 63488
00:14:42.380 }
00:14:42.380 ]
00:14:42.380 }'
00:14:42.380 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:42.380 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:42.641 [2024-11-26 21:22:00.661142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:42.641 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.900 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:42.900 "name": "Existed_Raid",
00:14:42.900 "uuid": "7edfeb21-9b6d-404e-a98c-442bdd36b044",
00:14:42.900 "strip_size_kb": 64,
00:14:42.900 "state": "configuring",
00:14:42.900 "raid_level": "raid5f",
00:14:42.900 "superblock": true,
00:14:42.900 "num_base_bdevs": 3,
00:14:42.900 "num_base_bdevs_discovered": 1,
00:14:42.900 "num_base_bdevs_operational": 3,
00:14:42.900 "base_bdevs_list": [
00:14:42.900 {
00:14:42.900 "name": null,
00:14:42.900 "uuid": "c7c3c307-7e57-4a6e-98ca-644d92f99eb3",
00:14:42.900 "is_configured": false,
00:14:42.900 "data_offset": 0,
00:14:42.900 "data_size": 63488
00:14:42.900 },
00:14:42.900 {
00:14:42.900 "name": null,
00:14:42.900 "uuid": "b2dd102d-065e-40c4-b4ba-c9554c925af1",
00:14:42.900 "is_configured": false,
00:14:42.900 "data_offset": 0,
00:14:42.900 "data_size": 63488
00:14:42.900 },
00:14:42.900 {
00:14:42.900 "name": "BaseBdev3",
00:14:42.900 "uuid": "03110350-514b-4ba7-b604-87b70b708a23",
00:14:42.900 "is_configured": true,
00:14:42.900 "data_offset": 2048,
00:14:42.900 "data_size": 63488
00:14:42.900 }
00:14:42.900 ]
00:14:42.900 }'
00:14:42.900 21:22:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:42.900 21:22:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:43.159 [2024-11-26 21:22:01.243198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:43.159 "name": "Existed_Raid",
00:14:43.159 "uuid": "7edfeb21-9b6d-404e-a98c-442bdd36b044",
00:14:43.159 "strip_size_kb": 64,
00:14:43.159 "state": "configuring",
00:14:43.159 "raid_level": "raid5f",
00:14:43.159 "superblock": true,
00:14:43.159 "num_base_bdevs": 3,
00:14:43.159 "num_base_bdevs_discovered": 2,
00:14:43.159 "num_base_bdevs_operational": 3,
00:14:43.159 "base_bdevs_list": [
00:14:43.159 {
00:14:43.159 "name": null,
00:14:43.159 "uuid": "c7c3c307-7e57-4a6e-98ca-644d92f99eb3",
00:14:43.159 "is_configured": false,
00:14:43.159 "data_offset": 0,
00:14:43.159 "data_size": 63488
00:14:43.159 },
00:14:43.159 {
00:14:43.159 "name": "BaseBdev2",
00:14:43.159 "uuid": "b2dd102d-065e-40c4-b4ba-c9554c925af1",
00:14:43.159 "is_configured": true,
00:14:43.159 "data_offset": 2048,
00:14:43.159 "data_size": 63488
00:14:43.159 },
00:14:43.159 {
00:14:43.159 "name": "BaseBdev3",
00:14:43.159 "uuid": "03110350-514b-4ba7-b604-87b70b708a23",
00:14:43.159 "is_configured": true,
00:14:43.159 "data_offset": 2048,
00:14:43.159 "data_size": 63488
00:14:43.159 }
00:14:43.159 ]
00:14:43.159 }'
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:43.159 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c7c3c307-7e57-4a6e-98ca-644d92f99eb3
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:43.728 [2024-11-26 21:22:01.828331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:14:43.728 [2024-11-26 21:22:01.828632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
[2024-11-26 21:22:01.828684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
[2024-11-26 21:22:01.828981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:14:43.728 NewBaseBdev
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.728 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:43.729 [2024-11-26 21:22:01.833808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
[2024-11-26 21:22:01.833867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
[2024-11-26 21:22:01.834083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:43.729 [
00:14:43.729 {
00:14:43.729 "name": "NewBaseBdev",
00:14:43.729 "aliases": [
00:14:43.729 "c7c3c307-7e57-4a6e-98ca-644d92f99eb3"
00:14:43.729 ],
00:14:43.729 "product_name": "Malloc disk",
00:14:43.729 "block_size": 512,
00:14:43.729 "num_blocks": 65536,
00:14:43.729 "uuid": "c7c3c307-7e57-4a6e-98ca-644d92f99eb3",
00:14:43.729 "assigned_rate_limits": {
00:14:43.729 "rw_ios_per_sec": 0,
00:14:43.729 "rw_mbytes_per_sec": 0,
00:14:43.729 "r_mbytes_per_sec": 0,
00:14:43.729 "w_mbytes_per_sec": 0
00:14:43.729 },
00:14:43.729 "claimed": true,
00:14:43.729 "claim_type": "exclusive_write",
00:14:43.729 "zoned": false,
00:14:43.729 "supported_io_types": {
00:14:43.729 "read": true,
00:14:43.729 "write": true,
00:14:43.729 "unmap": true,
00:14:43.729 "flush": true,
00:14:43.729 "reset": true,
00:14:43.729 "nvme_admin": false,
00:14:43.729 "nvme_io": false,
00:14:43.729 "nvme_io_md": false,
00:14:43.729 "write_zeroes": true,
00:14:43.729 "zcopy": true,
00:14:43.729 "get_zone_info": false,
00:14:43.729 "zone_management": false,
00:14:43.729 "zone_append": false,
00:14:43.729 "compare": false,
00:14:43.729 "compare_and_write": false,
00:14:43.729 "abort": true,
00:14:43.729 "seek_hole": false,
00:14:43.729 "seek_data": false,
00:14:43.729 "copy": true,
00:14:43.729 "nvme_iov_md": false
00:14:43.729 },
00:14:43.729 "memory_domains": [
00:14:43.729 {
00:14:43.729 "dma_device_id": "system",
00:14:43.729 "dma_device_type": 1
00:14:43.729 },
00:14:43.729 {
00:14:43.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:43.729 "dma_device_type": 2
00:14:43.729 }
00:14:43.729 ],
00:14:43.729 "driver_specific": {}
00:14:43.729 }
00:14:43.729 ]
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:43.729 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:43.988 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.988 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:43.988 "name": "Existed_Raid",
00:14:43.988 "uuid": "7edfeb21-9b6d-404e-a98c-442bdd36b044",
00:14:43.988 "strip_size_kb": 64,
00:14:43.988 "state": "online",
00:14:43.988 "raid_level": "raid5f",
00:14:43.988 "superblock": true,
00:14:43.988 "num_base_bdevs": 3,
00:14:43.988 "num_base_bdevs_discovered": 3,
00:14:43.988 "num_base_bdevs_operational": 3,
00:14:43.988 "base_bdevs_list": [
00:14:43.988 {
00:14:43.988 "name": "NewBaseBdev",
00:14:43.988 "uuid": "c7c3c307-7e57-4a6e-98ca-644d92f99eb3",
00:14:43.988 "is_configured": true,
00:14:43.988 "data_offset": 2048,
00:14:43.988 "data_size": 63488
00:14:43.988 },
00:14:43.988 {
00:14:43.988 "name": "BaseBdev2",
00:14:43.988 "uuid": "b2dd102d-065e-40c4-b4ba-c9554c925af1",
00:14:43.988 "is_configured": true,
00:14:43.988 "data_offset": 2048,
00:14:43.988 "data_size": 63488
00:14:43.988 },
00:14:43.988 {
00:14:43.988 "name": "BaseBdev3",
00:14:43.988 "uuid": "03110350-514b-4ba7-b604-87b70b708a23",
00:14:43.988 "is_configured": true,
00:14:43.988 "data_offset": 2048,
00:14:43.988 "data_size": 63488
00:14:43.988 }
00:14:43.988 ]
00:14:43.988 }'
00:14:43.988 21:22:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:43.988 21:22:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:44.248 [2024-11-26 21:22:02.324055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:44.248 "name": "Existed_Raid",
00:14:44.248 "aliases": [
00:14:44.248 "7edfeb21-9b6d-404e-a98c-442bdd36b044"
00:14:44.248 ],
00:14:44.248 "product_name": "Raid Volume",
00:14:44.248 "block_size": 512,
00:14:44.248 "num_blocks": 126976,
00:14:44.248 "uuid": "7edfeb21-9b6d-404e-a98c-442bdd36b044",
00:14:44.248 "assigned_rate_limits": {
00:14:44.248 "rw_ios_per_sec": 0,
00:14:44.248 "rw_mbytes_per_sec": 0,
00:14:44.248 "r_mbytes_per_sec": 0,
00:14:44.248 "w_mbytes_per_sec": 0
00:14:44.248 },
00:14:44.248 "claimed": false,
00:14:44.248 "zoned": false,
00:14:44.248 "supported_io_types": {
00:14:44.248 "read": true,
00:14:44.248 "write": true,
00:14:44.248 "unmap": false,
00:14:44.248 "flush": false,
00:14:44.248 "reset": true,
00:14:44.248 "nvme_admin": false,
00:14:44.248 "nvme_io": false,
00:14:44.248 "nvme_io_md": false,
00:14:44.248 "write_zeroes": true,
00:14:44.248 "zcopy": false,
00:14:44.248 "get_zone_info": false,
00:14:44.248 "zone_management": false,
00:14:44.248 "zone_append": false,
00:14:44.248 "compare": false,
00:14:44.248 "compare_and_write": false,
00:14:44.248 "abort": false,
00:14:44.248 "seek_hole": false,
00:14:44.248 "seek_data": false,
00:14:44.248 "copy": false,
00:14:44.248 "nvme_iov_md": false
00:14:44.248 },
00:14:44.248 "driver_specific": {
00:14:44.248 "raid": {
00:14:44.248 "uuid": "7edfeb21-9b6d-404e-a98c-442bdd36b044",
00:14:44.248 "strip_size_kb": 64,
00:14:44.248 "state": "online",
00:14:44.248 "raid_level": "raid5f",
00:14:44.248 "superblock": true,
00:14:44.248 "num_base_bdevs": 3,
"num_base_bdevs_discovered": 3, 00:14:44.248 "num_base_bdevs_operational": 3, 00:14:44.248 "base_bdevs_list": [ 00:14:44.248 { 00:14:44.248 "name": "NewBaseBdev", 00:14:44.248 "uuid": "c7c3c307-7e57-4a6e-98ca-644d92f99eb3", 00:14:44.248 "is_configured": true, 00:14:44.248 "data_offset": 2048, 00:14:44.248 "data_size": 63488 00:14:44.248 }, 00:14:44.248 { 00:14:44.248 "name": "BaseBdev2", 00:14:44.248 "uuid": "b2dd102d-065e-40c4-b4ba-c9554c925af1", 00:14:44.248 "is_configured": true, 00:14:44.248 "data_offset": 2048, 00:14:44.248 "data_size": 63488 00:14:44.248 }, 00:14:44.248 { 00:14:44.248 "name": "BaseBdev3", 00:14:44.248 "uuid": "03110350-514b-4ba7-b604-87b70b708a23", 00:14:44.248 "is_configured": true, 00:14:44.248 "data_offset": 2048, 00:14:44.248 "data_size": 63488 00:14:44.248 } 00:14:44.248 ] 00:14:44.248 } 00:14:44.248 } 00:14:44.248 }' 00:14:44.248 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:44.508 BaseBdev2 00:14:44.508 BaseBdev3' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.508 [2024-11-26 21:22:02.623351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.508 [2024-11-26 21:22:02.623374] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.508 [2024-11-26 21:22:02.623434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.508 [2024-11-26 21:22:02.623715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.508 [2024-11-26 21:22:02.623728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80308 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80308 ']' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80308 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.508 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80308 00:14:44.768 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.768 killing process with pid 80308 00:14:44.768 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.768 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80308' 00:14:44.768 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80308 00:14:44.769 [2024-11-26 21:22:02.672735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:44.769 21:22:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80308 00:14:45.027 [2024-11-26 21:22:02.983316] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.428 ************************************ 00:14:46.428 END TEST raid5f_state_function_test_sb 00:14:46.428 ************************************ 00:14:46.428 21:22:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:46.428 00:14:46.428 real 0m10.742s 00:14:46.428 user 0m16.856s 00:14:46.428 sys 0m2.035s 00:14:46.428 21:22:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.428 21:22:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.428 21:22:04 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:46.428 21:22:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:46.428 21:22:04 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.428 21:22:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.428 ************************************ 00:14:46.428 START TEST raid5f_superblock_test 00:14:46.428 ************************************ 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80934 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80934 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80934 ']' 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.428 21:22:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.428 [2024-11-26 21:22:04.313206] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:14:46.428 [2024-11-26 21:22:04.313842] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80934 ] 00:14:46.428 [2024-11-26 21:22:04.487347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.687 [2024-11-26 21:22:04.618552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.947 [2024-11-26 21:22:04.847801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.947 [2024-11-26 21:22:04.847966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.208 malloc1 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.208 [2024-11-26 21:22:05.191702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:47.208 [2024-11-26 21:22:05.191853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.208 [2024-11-26 21:22:05.191896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:47.208 [2024-11-26 21:22:05.191908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.208 [2024-11-26 21:22:05.194209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.208 [2024-11-26 21:22:05.194245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:47.208 pt1 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.208 malloc2 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.208 [2024-11-26 21:22:05.254361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:47.208 [2024-11-26 21:22:05.254493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.208 [2024-11-26 21:22:05.254540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:47.208 [2024-11-26 21:22:05.254568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.208 [2024-11-26 21:22:05.256901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.208 [2024-11-26 21:22:05.256980] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:47.208 pt2 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.208 malloc3 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.208 [2024-11-26 21:22:05.350443] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:47.208 [2024-11-26 21:22:05.350551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.208 [2024-11-26 21:22:05.350590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:47.208 [2024-11-26 21:22:05.350618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.208 [2024-11-26 21:22:05.352923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.208 [2024-11-26 21:22:05.353013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:47.208 pt3 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.208 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.468 [2024-11-26 21:22:05.362494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:47.468 [2024-11-26 21:22:05.364681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:47.468 [2024-11-26 21:22:05.364806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:47.468 [2024-11-26 21:22:05.365032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:47.468 [2024-11-26 21:22:05.365091] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:47.468 [2024-11-26 21:22:05.365355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:47.468 [2024-11-26 21:22:05.370633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:47.468 [2024-11-26 21:22:05.370684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:47.468 [2024-11-26 21:22:05.370895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.468 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:47.469 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.469 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.469 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.469 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.469 "name": "raid_bdev1", 00:14:47.469 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:47.469 "strip_size_kb": 64, 00:14:47.469 "state": "online", 00:14:47.469 "raid_level": "raid5f", 00:14:47.469 "superblock": true, 00:14:47.469 "num_base_bdevs": 3, 00:14:47.469 "num_base_bdevs_discovered": 3, 00:14:47.469 "num_base_bdevs_operational": 3, 00:14:47.469 "base_bdevs_list": [ 00:14:47.469 { 00:14:47.469 "name": "pt1", 00:14:47.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:47.469 "is_configured": true, 00:14:47.469 "data_offset": 2048, 00:14:47.469 "data_size": 63488 00:14:47.469 }, 00:14:47.469 { 00:14:47.469 "name": "pt2", 00:14:47.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:47.469 "is_configured": true, 00:14:47.469 "data_offset": 2048, 00:14:47.469 "data_size": 63488 00:14:47.469 }, 00:14:47.469 { 00:14:47.469 "name": "pt3", 00:14:47.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:47.469 "is_configured": true, 00:14:47.469 "data_offset": 2048, 00:14:47.469 "data_size": 63488 00:14:47.469 } 00:14:47.469 ] 00:14:47.469 }' 00:14:47.469 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.469 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:47.728 21:22:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.728 [2024-11-26 21:22:05.828950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:47.728 "name": "raid_bdev1", 00:14:47.728 "aliases": [ 00:14:47.728 "40ca1033-4f5b-4434-85a9-aa684850d73b" 00:14:47.728 ], 00:14:47.728 "product_name": "Raid Volume", 00:14:47.728 "block_size": 512, 00:14:47.728 "num_blocks": 126976, 00:14:47.728 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:47.728 "assigned_rate_limits": { 00:14:47.728 "rw_ios_per_sec": 0, 00:14:47.728 "rw_mbytes_per_sec": 0, 00:14:47.728 "r_mbytes_per_sec": 0, 00:14:47.728 "w_mbytes_per_sec": 0 00:14:47.728 }, 00:14:47.728 "claimed": false, 00:14:47.728 "zoned": false, 00:14:47.728 "supported_io_types": { 00:14:47.728 "read": true, 00:14:47.728 "write": true, 00:14:47.728 "unmap": false, 00:14:47.728 "flush": false, 00:14:47.728 "reset": true, 00:14:47.728 "nvme_admin": false, 00:14:47.728 "nvme_io": false, 00:14:47.728 "nvme_io_md": false, 
00:14:47.728 "write_zeroes": true, 00:14:47.728 "zcopy": false, 00:14:47.728 "get_zone_info": false, 00:14:47.728 "zone_management": false, 00:14:47.728 "zone_append": false, 00:14:47.728 "compare": false, 00:14:47.728 "compare_and_write": false, 00:14:47.728 "abort": false, 00:14:47.728 "seek_hole": false, 00:14:47.728 "seek_data": false, 00:14:47.728 "copy": false, 00:14:47.728 "nvme_iov_md": false 00:14:47.728 }, 00:14:47.728 "driver_specific": { 00:14:47.728 "raid": { 00:14:47.728 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:47.728 "strip_size_kb": 64, 00:14:47.728 "state": "online", 00:14:47.728 "raid_level": "raid5f", 00:14:47.728 "superblock": true, 00:14:47.728 "num_base_bdevs": 3, 00:14:47.728 "num_base_bdevs_discovered": 3, 00:14:47.728 "num_base_bdevs_operational": 3, 00:14:47.728 "base_bdevs_list": [ 00:14:47.728 { 00:14:47.728 "name": "pt1", 00:14:47.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:47.728 "is_configured": true, 00:14:47.728 "data_offset": 2048, 00:14:47.728 "data_size": 63488 00:14:47.728 }, 00:14:47.728 { 00:14:47.728 "name": "pt2", 00:14:47.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:47.728 "is_configured": true, 00:14:47.728 "data_offset": 2048, 00:14:47.728 "data_size": 63488 00:14:47.728 }, 00:14:47.728 { 00:14:47.728 "name": "pt3", 00:14:47.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:47.728 "is_configured": true, 00:14:47.728 "data_offset": 2048, 00:14:47.728 "data_size": 63488 00:14:47.728 } 00:14:47.728 ] 00:14:47.728 } 00:14:47.728 } 00:14:47.728 }' 00:14:47.728 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:47.988 pt2 00:14:47.988 pt3' 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.988 21:22:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.988 
21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.988 [2024-11-26 21:22:06.088459] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=40ca1033-4f5b-4434-85a9-aa684850d73b 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 40ca1033-4f5b-4434-85a9-aa684850d73b ']' 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:47.988 21:22:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.988 [2024-11-26 21:22:06.132234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:47.988 [2024-11-26 21:22:06.132258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.988 [2024-11-26 21:22:06.132317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.988 [2024-11-26 21:22:06.132375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.988 [2024-11-26 21:22:06.132385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.988 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.249 [2024-11-26 21:22:06.284114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:48.249 [2024-11-26 21:22:06.286098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:48.249 [2024-11-26 21:22:06.286149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:48.249 [2024-11-26 21:22:06.286189] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:48.249 [2024-11-26 21:22:06.286226] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:48.249 [2024-11-26 21:22:06.286243] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:48.249 [2024-11-26 21:22:06.286258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.249 [2024-11-26 21:22:06.286266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:48.249 request: 00:14:48.249 { 00:14:48.249 "name": "raid_bdev1", 00:14:48.249 "raid_level": "raid5f", 00:14:48.249 "base_bdevs": [ 00:14:48.249 "malloc1", 00:14:48.249 "malloc2", 00:14:48.249 "malloc3" 00:14:48.249 ], 00:14:48.249 "strip_size_kb": 64, 00:14:48.249 "superblock": false, 00:14:48.249 "method": "bdev_raid_create", 00:14:48.249 "req_id": 1 00:14:48.249 } 00:14:48.249 Got JSON-RPC error response 00:14:48.249 response: 00:14:48.249 { 00:14:48.249 "code": -17, 00:14:48.249 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:48.249 } 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.249 
21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.249 [2024-11-26 21:22:06.340064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:48.249 [2024-11-26 21:22:06.340152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.249 [2024-11-26 21:22:06.340185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:48.249 [2024-11-26 21:22:06.340208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.249 [2024-11-26 21:22:06.342503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.249 [2024-11-26 21:22:06.342571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:48.249 [2024-11-26 21:22:06.342653] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:48.249 [2024-11-26 21:22:06.342732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:48.249 pt1 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.249 "name": "raid_bdev1", 00:14:48.249 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:48.249 "strip_size_kb": 64, 00:14:48.249 "state": "configuring", 00:14:48.249 "raid_level": "raid5f", 00:14:48.249 "superblock": true, 00:14:48.249 "num_base_bdevs": 3, 00:14:48.249 "num_base_bdevs_discovered": 1, 00:14:48.249 
"num_base_bdevs_operational": 3, 00:14:48.249 "base_bdevs_list": [ 00:14:48.249 { 00:14:48.249 "name": "pt1", 00:14:48.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:48.249 "is_configured": true, 00:14:48.249 "data_offset": 2048, 00:14:48.249 "data_size": 63488 00:14:48.249 }, 00:14:48.249 { 00:14:48.249 "name": null, 00:14:48.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:48.249 "is_configured": false, 00:14:48.249 "data_offset": 2048, 00:14:48.249 "data_size": 63488 00:14:48.249 }, 00:14:48.249 { 00:14:48.249 "name": null, 00:14:48.249 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:48.249 "is_configured": false, 00:14:48.249 "data_offset": 2048, 00:14:48.249 "data_size": 63488 00:14:48.249 } 00:14:48.249 ] 00:14:48.249 }' 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.249 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.819 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:48.819 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:48.819 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.819 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.819 [2024-11-26 21:22:06.791363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:48.819 [2024-11-26 21:22:06.791405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.819 [2024-11-26 21:22:06.791420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:48.819 [2024-11-26 21:22:06.791427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.819 [2024-11-26 21:22:06.791742] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.819 [2024-11-26 21:22:06.791765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:48.819 [2024-11-26 21:22:06.791825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:48.819 [2024-11-26 21:22:06.791848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:48.819 pt2 00:14:48.819 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.820 [2024-11-26 21:22:06.803361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.820 "name": "raid_bdev1", 00:14:48.820 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:48.820 "strip_size_kb": 64, 00:14:48.820 "state": "configuring", 00:14:48.820 "raid_level": "raid5f", 00:14:48.820 "superblock": true, 00:14:48.820 "num_base_bdevs": 3, 00:14:48.820 "num_base_bdevs_discovered": 1, 00:14:48.820 "num_base_bdevs_operational": 3, 00:14:48.820 "base_bdevs_list": [ 00:14:48.820 { 00:14:48.820 "name": "pt1", 00:14:48.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:48.820 "is_configured": true, 00:14:48.820 "data_offset": 2048, 00:14:48.820 "data_size": 63488 00:14:48.820 }, 00:14:48.820 { 00:14:48.820 "name": null, 00:14:48.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:48.820 "is_configured": false, 00:14:48.820 "data_offset": 0, 00:14:48.820 "data_size": 63488 00:14:48.820 }, 00:14:48.820 { 00:14:48.820 "name": null, 00:14:48.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:48.820 "is_configured": false, 00:14:48.820 "data_offset": 2048, 00:14:48.820 "data_size": 63488 00:14:48.820 } 00:14:48.820 ] 00:14:48.820 }' 00:14:48.820 21:22:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.820 21:22:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.390 [2024-11-26 21:22:07.274514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:49.390 [2024-11-26 21:22:07.274563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.390 [2024-11-26 21:22:07.274575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:49.390 [2024-11-26 21:22:07.274585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.390 [2024-11-26 21:22:07.274921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.390 [2024-11-26 21:22:07.274939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:49.390 [2024-11-26 21:22:07.275003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:49.390 [2024-11-26 21:22:07.275024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:49.390 pt2 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:49.390 21:22:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.390 [2024-11-26 21:22:07.286504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:49.390 [2024-11-26 21:22:07.286547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.390 [2024-11-26 21:22:07.286558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:49.390 [2024-11-26 21:22:07.286568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.390 [2024-11-26 21:22:07.286892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.390 [2024-11-26 21:22:07.286913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:49.390 [2024-11-26 21:22:07.286976] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:49.390 [2024-11-26 21:22:07.286996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:49.390 [2024-11-26 21:22:07.287104] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:49.390 [2024-11-26 21:22:07.287117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:49.390 [2024-11-26 21:22:07.287343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:49.390 [2024-11-26 21:22:07.292442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:49.390 pt3 00:14:49.390 [2024-11-26 21:22:07.292507] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:49.390 [2024-11-26 21:22:07.292672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.390 "name": "raid_bdev1", 00:14:49.390 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:49.390 "strip_size_kb": 64, 00:14:49.390 "state": "online", 00:14:49.390 "raid_level": "raid5f", 00:14:49.390 "superblock": true, 00:14:49.390 "num_base_bdevs": 3, 00:14:49.390 "num_base_bdevs_discovered": 3, 00:14:49.390 "num_base_bdevs_operational": 3, 00:14:49.390 "base_bdevs_list": [ 00:14:49.390 { 00:14:49.390 "name": "pt1", 00:14:49.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:49.390 "is_configured": true, 00:14:49.390 "data_offset": 2048, 00:14:49.390 "data_size": 63488 00:14:49.390 }, 00:14:49.390 { 00:14:49.390 "name": "pt2", 00:14:49.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:49.390 "is_configured": true, 00:14:49.390 "data_offset": 2048, 00:14:49.390 "data_size": 63488 00:14:49.390 }, 00:14:49.390 { 00:14:49.390 "name": "pt3", 00:14:49.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:49.390 "is_configured": true, 00:14:49.390 "data_offset": 2048, 00:14:49.390 "data_size": 63488 00:14:49.390 } 00:14:49.390 ] 00:14:49.390 }' 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.390 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:49.650 21:22:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.650 [2024-11-26 21:22:07.734526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:49.650 "name": "raid_bdev1", 00:14:49.650 "aliases": [ 00:14:49.650 "40ca1033-4f5b-4434-85a9-aa684850d73b" 00:14:49.650 ], 00:14:49.650 "product_name": "Raid Volume", 00:14:49.650 "block_size": 512, 00:14:49.650 "num_blocks": 126976, 00:14:49.650 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:49.650 "assigned_rate_limits": { 00:14:49.650 "rw_ios_per_sec": 0, 00:14:49.650 "rw_mbytes_per_sec": 0, 00:14:49.650 "r_mbytes_per_sec": 0, 00:14:49.650 "w_mbytes_per_sec": 0 00:14:49.650 }, 00:14:49.650 "claimed": false, 00:14:49.650 "zoned": false, 00:14:49.650 "supported_io_types": { 00:14:49.650 "read": true, 00:14:49.650 "write": true, 00:14:49.650 "unmap": false, 00:14:49.650 "flush": false, 00:14:49.650 "reset": true, 00:14:49.650 "nvme_admin": false, 00:14:49.650 "nvme_io": false, 00:14:49.650 "nvme_io_md": false, 00:14:49.650 "write_zeroes": true, 00:14:49.650 "zcopy": false, 00:14:49.650 "get_zone_info": false, 
00:14:49.650 "zone_management": false, 00:14:49.650 "zone_append": false, 00:14:49.650 "compare": false, 00:14:49.650 "compare_and_write": false, 00:14:49.650 "abort": false, 00:14:49.650 "seek_hole": false, 00:14:49.650 "seek_data": false, 00:14:49.650 "copy": false, 00:14:49.650 "nvme_iov_md": false 00:14:49.650 }, 00:14:49.650 "driver_specific": { 00:14:49.650 "raid": { 00:14:49.650 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:49.650 "strip_size_kb": 64, 00:14:49.650 "state": "online", 00:14:49.650 "raid_level": "raid5f", 00:14:49.650 "superblock": true, 00:14:49.650 "num_base_bdevs": 3, 00:14:49.650 "num_base_bdevs_discovered": 3, 00:14:49.650 "num_base_bdevs_operational": 3, 00:14:49.650 "base_bdevs_list": [ 00:14:49.650 { 00:14:49.650 "name": "pt1", 00:14:49.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:49.650 "is_configured": true, 00:14:49.650 "data_offset": 2048, 00:14:49.650 "data_size": 63488 00:14:49.650 }, 00:14:49.650 { 00:14:49.650 "name": "pt2", 00:14:49.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:49.650 "is_configured": true, 00:14:49.650 "data_offset": 2048, 00:14:49.650 "data_size": 63488 00:14:49.650 }, 00:14:49.650 { 00:14:49.650 "name": "pt3", 00:14:49.650 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:49.650 "is_configured": true, 00:14:49.650 "data_offset": 2048, 00:14:49.650 "data_size": 63488 00:14:49.650 } 00:14:49.650 ] 00:14:49.650 } 00:14:49.650 } 00:14:49.650 }' 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:49.650 pt2 00:14:49.650 pt3' 00:14:49.650 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.910 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.911 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.911 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.911 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.911 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.911 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:49.911 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.911 21:22:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:49.911 21:22:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.911 [2024-11-26 21:22:08.006056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 40ca1033-4f5b-4434-85a9-aa684850d73b '!=' 40ca1033-4f5b-4434-85a9-aa684850d73b ']' 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:49.911 21:22:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.911 [2024-11-26 21:22:08.053846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.911 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.170 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.170 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.170 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:50.170 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.170 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.170 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.170 "name": "raid_bdev1", 00:14:50.170 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:50.170 "strip_size_kb": 64, 00:14:50.170 "state": "online", 00:14:50.170 "raid_level": "raid5f", 00:14:50.170 "superblock": true, 00:14:50.170 "num_base_bdevs": 3, 00:14:50.170 "num_base_bdevs_discovered": 2, 00:14:50.171 "num_base_bdevs_operational": 2, 00:14:50.171 "base_bdevs_list": [ 00:14:50.171 { 00:14:50.171 "name": null, 00:14:50.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.171 "is_configured": false, 00:14:50.171 "data_offset": 0, 00:14:50.171 "data_size": 63488 00:14:50.171 }, 00:14:50.171 { 00:14:50.171 "name": "pt2", 00:14:50.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.171 "is_configured": true, 00:14:50.171 "data_offset": 2048, 00:14:50.171 "data_size": 63488 00:14:50.171 }, 00:14:50.171 { 00:14:50.171 "name": "pt3", 00:14:50.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:50.171 "is_configured": true, 00:14:50.171 "data_offset": 2048, 00:14:50.171 "data_size": 63488 00:14:50.171 } 00:14:50.171 ] 00:14:50.171 }' 00:14:50.171 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.171 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.430 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:50.430 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.430 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.430 [2024-11-26 21:22:08.469083] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:14:50.430 [2024-11-26 21:22:08.469147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.430 [2024-11-26 21:22:08.469209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.430 [2024-11-26 21:22:08.469262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.430 [2024-11-26 21:22:08.469296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:50.430 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.430 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.430 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:50.430 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.430 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.431 21:22:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.431 [2024-11-26 21:22:08.552928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:50.431 [2024-11-26 21:22:08.553021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.431 [2024-11-26 21:22:08.553038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:50.431 [2024-11-26 21:22:08.553048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:50.431 [2024-11-26 21:22:08.555269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.431 [2024-11-26 21:22:08.555306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:50.431 [2024-11-26 21:22:08.555358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:50.431 [2024-11-26 21:22:08.555402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:50.431 pt2 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.431 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.691 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.691 "name": "raid_bdev1", 00:14:50.691 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:50.691 "strip_size_kb": 64, 00:14:50.691 "state": "configuring", 00:14:50.691 "raid_level": "raid5f", 00:14:50.691 "superblock": true, 00:14:50.691 "num_base_bdevs": 3, 00:14:50.691 "num_base_bdevs_discovered": 1, 00:14:50.691 "num_base_bdevs_operational": 2, 00:14:50.691 "base_bdevs_list": [ 00:14:50.691 { 00:14:50.691 "name": null, 00:14:50.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.691 "is_configured": false, 00:14:50.691 "data_offset": 2048, 00:14:50.691 "data_size": 63488 00:14:50.691 }, 00:14:50.691 { 00:14:50.691 "name": "pt2", 00:14:50.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.691 "is_configured": true, 00:14:50.691 "data_offset": 2048, 00:14:50.691 "data_size": 63488 00:14:50.691 }, 00:14:50.691 { 00:14:50.691 "name": null, 00:14:50.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:50.691 "is_configured": false, 00:14:50.691 "data_offset": 2048, 00:14:50.691 "data_size": 63488 00:14:50.691 } 00:14:50.691 ] 00:14:50.691 }' 00:14:50.691 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.691 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.951 [2024-11-26 21:22:08.980175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:50.951 [2024-11-26 21:22:08.980223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.951 [2024-11-26 21:22:08.980239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:50.951 [2024-11-26 21:22:08.980247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.951 [2024-11-26 21:22:08.980612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.951 [2024-11-26 21:22:08.980643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:50.951 [2024-11-26 21:22:08.980693] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:50.951 [2024-11-26 21:22:08.980716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:50.951 [2024-11-26 21:22:08.980799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:50.951 [2024-11-26 21:22:08.980815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:50.951 [2024-11-26 21:22:08.981064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:50.951 [2024-11-26 21:22:08.985817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:50.951 [2024-11-26 21:22:08.985837] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:14:50.951 [2024-11-26 21:22:08.986133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.951 pt3 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.951 21:22:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.951 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.951 21:22:09 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.951 "name": "raid_bdev1", 00:14:50.951 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:50.951 "strip_size_kb": 64, 00:14:50.951 "state": "online", 00:14:50.951 "raid_level": "raid5f", 00:14:50.951 "superblock": true, 00:14:50.951 "num_base_bdevs": 3, 00:14:50.951 "num_base_bdevs_discovered": 2, 00:14:50.951 "num_base_bdevs_operational": 2, 00:14:50.951 "base_bdevs_list": [ 00:14:50.951 { 00:14:50.951 "name": null, 00:14:50.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.951 "is_configured": false, 00:14:50.951 "data_offset": 2048, 00:14:50.951 "data_size": 63488 00:14:50.951 }, 00:14:50.951 { 00:14:50.951 "name": "pt2", 00:14:50.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.951 "is_configured": true, 00:14:50.951 "data_offset": 2048, 00:14:50.951 "data_size": 63488 00:14:50.951 }, 00:14:50.951 { 00:14:50.951 "name": "pt3", 00:14:50.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:50.951 "is_configured": true, 00:14:50.951 "data_offset": 2048, 00:14:50.951 "data_size": 63488 00:14:50.952 } 00:14:50.952 ] 00:14:50.952 }' 00:14:50.952 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.952 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.521 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:51.521 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.521 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.521 [2024-11-26 21:22:09.447794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.521 [2024-11-26 21:22:09.447868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.521 [2024-11-26 21:22:09.447935] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:51.521 [2024-11-26 21:22:09.448026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.522 [2024-11-26 21:22:09.448084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:51.522 21:22:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.522 [2024-11-26 21:22:09.523698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:51.522 [2024-11-26 21:22:09.523782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.522 [2024-11-26 21:22:09.523813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:51.522 [2024-11-26 21:22:09.523838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.522 [2024-11-26 21:22:09.526154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.522 [2024-11-26 21:22:09.526219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:51.522 [2024-11-26 21:22:09.526294] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:51.522 [2024-11-26 21:22:09.526355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:51.522 [2024-11-26 21:22:09.526494] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:51.522 [2024-11-26 21:22:09.526546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.522 [2024-11-26 21:22:09.526587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:51.522 [2024-11-26 21:22:09.526700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:51.522 pt1 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:51.522 21:22:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.522 "name": "raid_bdev1", 00:14:51.522 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:51.522 "strip_size_kb": 64, 00:14:51.522 "state": "configuring", 00:14:51.522 "raid_level": "raid5f", 00:14:51.522 
"superblock": true, 00:14:51.522 "num_base_bdevs": 3, 00:14:51.522 "num_base_bdevs_discovered": 1, 00:14:51.522 "num_base_bdevs_operational": 2, 00:14:51.522 "base_bdevs_list": [ 00:14:51.522 { 00:14:51.522 "name": null, 00:14:51.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.522 "is_configured": false, 00:14:51.522 "data_offset": 2048, 00:14:51.522 "data_size": 63488 00:14:51.522 }, 00:14:51.522 { 00:14:51.522 "name": "pt2", 00:14:51.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.522 "is_configured": true, 00:14:51.522 "data_offset": 2048, 00:14:51.522 "data_size": 63488 00:14:51.522 }, 00:14:51.522 { 00:14:51.522 "name": null, 00:14:51.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:51.522 "is_configured": false, 00:14:51.522 "data_offset": 2048, 00:14:51.522 "data_size": 63488 00:14:51.522 } 00:14:51.522 ] 00:14:51.522 }' 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.522 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.782 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:51.782 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:51.782 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.782 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.043 [2024-11-26 21:22:09.947040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:52.043 [2024-11-26 21:22:09.947081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.043 [2024-11-26 21:22:09.947096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:52.043 [2024-11-26 21:22:09.947104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.043 [2024-11-26 21:22:09.947467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.043 [2024-11-26 21:22:09.947488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:52.043 [2024-11-26 21:22:09.947539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:52.043 [2024-11-26 21:22:09.947555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:52.043 [2024-11-26 21:22:09.947652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:52.043 [2024-11-26 21:22:09.947668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:52.043 [2024-11-26 21:22:09.947909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:52.043 [2024-11-26 21:22:09.952715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:52.043 pt3 00:14:52.043 [2024-11-26 21:22:09.952789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:52.043 [2024-11-26 21:22:09.953009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.043 21:22:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.043 21:22:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.043 "name": "raid_bdev1", 00:14:52.043 "uuid": "40ca1033-4f5b-4434-85a9-aa684850d73b", 00:14:52.043 "strip_size_kb": 64, 00:14:52.043 "state": "online", 00:14:52.043 "raid_level": 
"raid5f", 00:14:52.043 "superblock": true, 00:14:52.043 "num_base_bdevs": 3, 00:14:52.043 "num_base_bdevs_discovered": 2, 00:14:52.043 "num_base_bdevs_operational": 2, 00:14:52.043 "base_bdevs_list": [ 00:14:52.043 { 00:14:52.043 "name": null, 00:14:52.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.043 "is_configured": false, 00:14:52.043 "data_offset": 2048, 00:14:52.043 "data_size": 63488 00:14:52.043 }, 00:14:52.043 { 00:14:52.043 "name": "pt2", 00:14:52.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.043 "is_configured": true, 00:14:52.043 "data_offset": 2048, 00:14:52.043 "data_size": 63488 00:14:52.043 }, 00:14:52.043 { 00:14:52.043 "name": "pt3", 00:14:52.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:52.043 "is_configured": true, 00:14:52.043 "data_offset": 2048, 00:14:52.043 "data_size": 63488 00:14:52.043 } 00:14:52.043 ] 00:14:52.043 }' 00:14:52.043 21:22:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.043 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.303 21:22:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:52.303 21:22:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:52.303 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.303 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.303 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:52.563 [2024-11-26 21:22:10.486193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 40ca1033-4f5b-4434-85a9-aa684850d73b '!=' 40ca1033-4f5b-4434-85a9-aa684850d73b ']' 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80934 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80934 ']' 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80934 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80934 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.563 killing process with pid 80934 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80934' 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80934 00:14:52.563 [2024-11-26 21:22:10.569682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.563 [2024-11-26 21:22:10.569740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:14:52.563 [2024-11-26 21:22:10.569780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.563 [2024-11-26 21:22:10.569790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:52.563 21:22:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80934 00:14:52.823 [2024-11-26 21:22:10.880823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.205 21:22:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:54.205 00:14:54.205 real 0m7.831s 00:14:54.205 user 0m12.006s 00:14:54.205 sys 0m1.536s 00:14:54.205 21:22:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.205 ************************************ 00:14:54.205 END TEST raid5f_superblock_test 00:14:54.205 ************************************ 00:14:54.205 21:22:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.205 21:22:12 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:54.205 21:22:12 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:54.205 21:22:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:54.205 21:22:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.205 21:22:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.205 ************************************ 00:14:54.205 START TEST raid5f_rebuild_test 00:14:54.205 ************************************ 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81373 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:54.205 21:22:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81373 00:14:54.206 21:22:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81373 ']' 00:14:54.206 21:22:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.206 21:22:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.206 21:22:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:54.206 21:22:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.206 21:22:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.206 [2024-11-26 21:22:12.229979] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:14:54.206 [2024-11-26 21:22:12.230196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:54.206 Zero copy mechanism will not be used. 00:14:54.206 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81373 ] 00:14:54.465 [2024-11-26 21:22:12.404268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.465 [2024-11-26 21:22:12.534951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.725 [2024-11-26 21:22:12.769002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.725 [2024-11-26 21:22:12.769165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.984 BaseBdev1_malloc 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.984 
21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.984 [2024-11-26 21:22:13.113515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:54.984 [2024-11-26 21:22:13.113684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.984 [2024-11-26 21:22:13.113714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:54.984 [2024-11-26 21:22:13.113728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.984 [2024-11-26 21:22:13.116254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.984 [2024-11-26 21:22:13.116295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:54.984 BaseBdev1 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.984 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.243 BaseBdev2_malloc 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.243 [2024-11-26 21:22:13.174298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:55.243 [2024-11-26 21:22:13.174362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.243 [2024-11-26 21:22:13.174388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:55.243 [2024-11-26 21:22:13.174400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.243 [2024-11-26 21:22:13.176635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.243 [2024-11-26 21:22:13.176742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:55.243 BaseBdev2 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.243 BaseBdev3_malloc 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.243 [2024-11-26 21:22:13.247397] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:55.243 [2024-11-26 21:22:13.247448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.243 [2024-11-26 21:22:13.247471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:55.243 [2024-11-26 21:22:13.247483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.243 [2024-11-26 21:22:13.249856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.243 [2024-11-26 21:22:13.249896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:55.243 BaseBdev3 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.243 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.243 spare_malloc 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.244 spare_delay 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.244 [2024-11-26 21:22:13.319893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:55.244 [2024-11-26 21:22:13.320039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.244 [2024-11-26 21:22:13.320061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:55.244 [2024-11-26 21:22:13.320072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.244 [2024-11-26 21:22:13.322339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.244 [2024-11-26 21:22:13.322379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:55.244 spare 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.244 [2024-11-26 21:22:13.331945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.244 [2024-11-26 21:22:13.333949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.244 [2024-11-26 21:22:13.334023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.244 [2024-11-26 21:22:13.334102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:55.244 [2024-11-26 21:22:13.334113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:55.244 [2024-11-26 
21:22:13.334355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:55.244 [2024-11-26 21:22:13.340059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:55.244 [2024-11-26 21:22:13.340127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:55.244 [2024-11-26 21:22:13.340330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.244 "name": "raid_bdev1", 00:14:55.244 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:14:55.244 "strip_size_kb": 64, 00:14:55.244 "state": "online", 00:14:55.244 "raid_level": "raid5f", 00:14:55.244 "superblock": false, 00:14:55.244 "num_base_bdevs": 3, 00:14:55.244 "num_base_bdevs_discovered": 3, 00:14:55.244 "num_base_bdevs_operational": 3, 00:14:55.244 "base_bdevs_list": [ 00:14:55.244 { 00:14:55.244 "name": "BaseBdev1", 00:14:55.244 "uuid": "96c08be4-d4d3-5862-9df4-249e141b11fa", 00:14:55.244 "is_configured": true, 00:14:55.244 "data_offset": 0, 00:14:55.244 "data_size": 65536 00:14:55.244 }, 00:14:55.244 { 00:14:55.244 "name": "BaseBdev2", 00:14:55.244 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:14:55.244 "is_configured": true, 00:14:55.244 "data_offset": 0, 00:14:55.244 "data_size": 65536 00:14:55.244 }, 00:14:55.244 { 00:14:55.244 "name": "BaseBdev3", 00:14:55.244 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:14:55.244 "is_configured": true, 00:14:55.244 "data_offset": 0, 00:14:55.244 "data_size": 65536 00:14:55.244 } 00:14:55.244 ] 00:14:55.244 }' 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.244 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.813 21:22:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.813 [2024-11-26 21:22:13.790648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.813 21:22:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:56.073 [2024-11-26 21:22:14.066051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:56.073 /dev/nbd0 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.073 1+0 records in 00:14:56.073 1+0 records out 00:14:56.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348971 s, 11.7 MB/s 00:14:56.073 
21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:56.073 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:56.643 512+0 records in 00:14:56.643 512+0 records out 00:14:56.643 67108864 bytes (67 MB, 64 MiB) copied, 0.406613 s, 165 MB/s 00:14:56.643 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:56.644 [2024-11-26 21:22:14.754312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.644 [2024-11-26 21:22:14.769201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.644 21:22:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.644 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.904 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.904 "name": "raid_bdev1", 00:14:56.904 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:14:56.904 "strip_size_kb": 64, 00:14:56.904 "state": "online", 00:14:56.904 "raid_level": "raid5f", 00:14:56.904 "superblock": false, 00:14:56.904 "num_base_bdevs": 3, 00:14:56.904 "num_base_bdevs_discovered": 2, 00:14:56.904 "num_base_bdevs_operational": 2, 00:14:56.904 "base_bdevs_list": [ 00:14:56.904 { 00:14:56.904 "name": null, 00:14:56.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.904 "is_configured": false, 00:14:56.904 "data_offset": 0, 00:14:56.904 "data_size": 65536 00:14:56.904 }, 00:14:56.904 { 00:14:56.904 
"name": "BaseBdev2", 00:14:56.904 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:14:56.904 "is_configured": true, 00:14:56.904 "data_offset": 0, 00:14:56.904 "data_size": 65536 00:14:56.904 }, 00:14:56.904 { 00:14:56.904 "name": "BaseBdev3", 00:14:56.904 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:14:56.904 "is_configured": true, 00:14:56.904 "data_offset": 0, 00:14:56.904 "data_size": 65536 00:14:56.904 } 00:14:56.904 ] 00:14:56.904 }' 00:14:56.904 21:22:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.904 21:22:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.163 21:22:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:57.163 21:22:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.164 21:22:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.164 [2024-11-26 21:22:15.264327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.164 [2024-11-26 21:22:15.279485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:57.164 21:22:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.164 21:22:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:57.164 [2024-11-26 21:22:15.286558] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.546 "name": "raid_bdev1", 00:14:58.546 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:14:58.546 "strip_size_kb": 64, 00:14:58.546 "state": "online", 00:14:58.546 "raid_level": "raid5f", 00:14:58.546 "superblock": false, 00:14:58.546 "num_base_bdevs": 3, 00:14:58.546 "num_base_bdevs_discovered": 3, 00:14:58.546 "num_base_bdevs_operational": 3, 00:14:58.546 "process": { 00:14:58.546 "type": "rebuild", 00:14:58.546 "target": "spare", 00:14:58.546 "progress": { 00:14:58.546 "blocks": 20480, 00:14:58.546 "percent": 15 00:14:58.546 } 00:14:58.546 }, 00:14:58.546 "base_bdevs_list": [ 00:14:58.546 { 00:14:58.546 "name": "spare", 00:14:58.546 "uuid": "9c5a7308-6cae-51af-8fb4-4598e99f84ca", 00:14:58.546 "is_configured": true, 00:14:58.546 "data_offset": 0, 00:14:58.546 "data_size": 65536 00:14:58.546 }, 00:14:58.546 { 00:14:58.546 "name": "BaseBdev2", 00:14:58.546 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:14:58.546 "is_configured": true, 00:14:58.546 "data_offset": 0, 00:14:58.546 "data_size": 65536 00:14:58.546 }, 00:14:58.546 { 00:14:58.546 "name": "BaseBdev3", 00:14:58.546 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:14:58.546 "is_configured": true, 00:14:58.546 "data_offset": 0, 00:14:58.546 
"data_size": 65536 00:14:58.546 } 00:14:58.546 ] 00:14:58.546 }' 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.546 [2024-11-26 21:22:16.425594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.546 [2024-11-26 21:22:16.495354] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:58.546 [2024-11-26 21:22:16.495460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.546 [2024-11-26 21:22:16.495481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.546 [2024-11-26 21:22:16.495488] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.546 "name": "raid_bdev1", 00:14:58.546 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:14:58.546 "strip_size_kb": 64, 00:14:58.546 "state": "online", 00:14:58.546 "raid_level": "raid5f", 00:14:58.546 "superblock": false, 00:14:58.546 "num_base_bdevs": 3, 00:14:58.546 "num_base_bdevs_discovered": 2, 00:14:58.546 "num_base_bdevs_operational": 2, 00:14:58.546 "base_bdevs_list": [ 00:14:58.546 { 00:14:58.546 "name": null, 00:14:58.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.546 "is_configured": false, 00:14:58.546 "data_offset": 0, 00:14:58.546 "data_size": 65536 00:14:58.546 }, 00:14:58.546 { 00:14:58.546 "name": "BaseBdev2", 00:14:58.546 
"uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:14:58.546 "is_configured": true, 00:14:58.546 "data_offset": 0, 00:14:58.546 "data_size": 65536 00:14:58.546 }, 00:14:58.546 { 00:14:58.546 "name": "BaseBdev3", 00:14:58.546 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:14:58.546 "is_configured": true, 00:14:58.546 "data_offset": 0, 00:14:58.546 "data_size": 65536 00:14:58.546 } 00:14:58.546 ] 00:14:58.546 }' 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.546 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.116 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.116 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.116 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.116 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.116 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.116 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.116 21:22:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.116 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.116 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.116 21:22:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.116 21:22:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.116 "name": "raid_bdev1", 00:14:59.116 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:14:59.116 "strip_size_kb": 64, 00:14:59.116 "state": "online", 00:14:59.116 "raid_level": 
"raid5f", 00:14:59.116 "superblock": false, 00:14:59.116 "num_base_bdevs": 3, 00:14:59.116 "num_base_bdevs_discovered": 2, 00:14:59.116 "num_base_bdevs_operational": 2, 00:14:59.116 "base_bdevs_list": [ 00:14:59.116 { 00:14:59.116 "name": null, 00:14:59.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.116 "is_configured": false, 00:14:59.116 "data_offset": 0, 00:14:59.116 "data_size": 65536 00:14:59.116 }, 00:14:59.116 { 00:14:59.116 "name": "BaseBdev2", 00:14:59.116 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:14:59.116 "is_configured": true, 00:14:59.116 "data_offset": 0, 00:14:59.116 "data_size": 65536 00:14:59.116 }, 00:14:59.116 { 00:14:59.116 "name": "BaseBdev3", 00:14:59.116 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:14:59.116 "is_configured": true, 00:14:59.116 "data_offset": 0, 00:14:59.116 "data_size": 65536 00:14:59.116 } 00:14:59.116 ] 00:14:59.116 }' 00:14:59.116 21:22:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.116 21:22:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.116 21:22:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.116 21:22:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.116 21:22:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.116 21:22:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.116 21:22:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.116 [2024-11-26 21:22:17.118076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.116 [2024-11-26 21:22:17.131849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:14:59.116 21:22:17 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.116 21:22:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:59.116 [2024-11-26 21:22:17.139446] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.055 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.055 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.055 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.055 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.055 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.055 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.055 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.055 21:22:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.055 21:22:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.055 21:22:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.055 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.055 "name": "raid_bdev1", 00:15:00.055 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:15:00.055 "strip_size_kb": 64, 00:15:00.055 "state": "online", 00:15:00.055 "raid_level": "raid5f", 00:15:00.055 "superblock": false, 00:15:00.055 "num_base_bdevs": 3, 00:15:00.055 "num_base_bdevs_discovered": 3, 00:15:00.055 "num_base_bdevs_operational": 3, 00:15:00.055 "process": { 00:15:00.055 "type": "rebuild", 00:15:00.055 "target": "spare", 00:15:00.055 "progress": { 00:15:00.055 "blocks": 20480, 00:15:00.055 
"percent": 15 00:15:00.055 } 00:15:00.055 }, 00:15:00.055 "base_bdevs_list": [ 00:15:00.055 { 00:15:00.055 "name": "spare", 00:15:00.055 "uuid": "9c5a7308-6cae-51af-8fb4-4598e99f84ca", 00:15:00.055 "is_configured": true, 00:15:00.055 "data_offset": 0, 00:15:00.055 "data_size": 65536 00:15:00.055 }, 00:15:00.055 { 00:15:00.055 "name": "BaseBdev2", 00:15:00.055 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:15:00.055 "is_configured": true, 00:15:00.055 "data_offset": 0, 00:15:00.055 "data_size": 65536 00:15:00.055 }, 00:15:00.055 { 00:15:00.055 "name": "BaseBdev3", 00:15:00.055 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:15:00.056 "is_configured": true, 00:15:00.056 "data_offset": 0, 00:15:00.056 "data_size": 65536 00:15:00.056 } 00:15:00.056 ] 00:15:00.056 }' 00:15:00.056 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=540 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.316 "name": "raid_bdev1", 00:15:00.316 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:15:00.316 "strip_size_kb": 64, 00:15:00.316 "state": "online", 00:15:00.316 "raid_level": "raid5f", 00:15:00.316 "superblock": false, 00:15:00.316 "num_base_bdevs": 3, 00:15:00.316 "num_base_bdevs_discovered": 3, 00:15:00.316 "num_base_bdevs_operational": 3, 00:15:00.316 "process": { 00:15:00.316 "type": "rebuild", 00:15:00.316 "target": "spare", 00:15:00.316 "progress": { 00:15:00.316 "blocks": 22528, 00:15:00.316 "percent": 17 00:15:00.316 } 00:15:00.316 }, 00:15:00.316 "base_bdevs_list": [ 00:15:00.316 { 00:15:00.316 "name": "spare", 00:15:00.316 "uuid": "9c5a7308-6cae-51af-8fb4-4598e99f84ca", 00:15:00.316 "is_configured": true, 00:15:00.316 "data_offset": 0, 00:15:00.316 "data_size": 65536 00:15:00.316 }, 00:15:00.316 { 00:15:00.316 "name": "BaseBdev2", 00:15:00.316 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:15:00.316 "is_configured": true, 00:15:00.316 "data_offset": 0, 00:15:00.316 
"data_size": 65536 00:15:00.316 }, 00:15:00.316 { 00:15:00.316 "name": "BaseBdev3", 00:15:00.316 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:15:00.316 "is_configured": true, 00:15:00.316 "data_offset": 0, 00:15:00.316 "data_size": 65536 00:15:00.316 } 00:15:00.316 ] 00:15:00.316 }' 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.316 21:22:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.704 "name": "raid_bdev1", 00:15:01.704 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:15:01.704 "strip_size_kb": 64, 00:15:01.704 "state": "online", 00:15:01.704 "raid_level": "raid5f", 00:15:01.704 "superblock": false, 00:15:01.704 "num_base_bdevs": 3, 00:15:01.704 "num_base_bdevs_discovered": 3, 00:15:01.704 "num_base_bdevs_operational": 3, 00:15:01.704 "process": { 00:15:01.704 "type": "rebuild", 00:15:01.704 "target": "spare", 00:15:01.704 "progress": { 00:15:01.704 "blocks": 45056, 00:15:01.704 "percent": 34 00:15:01.704 } 00:15:01.704 }, 00:15:01.704 "base_bdevs_list": [ 00:15:01.704 { 00:15:01.704 "name": "spare", 00:15:01.704 "uuid": "9c5a7308-6cae-51af-8fb4-4598e99f84ca", 00:15:01.704 "is_configured": true, 00:15:01.704 "data_offset": 0, 00:15:01.704 "data_size": 65536 00:15:01.704 }, 00:15:01.704 { 00:15:01.704 "name": "BaseBdev2", 00:15:01.704 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:15:01.704 "is_configured": true, 00:15:01.704 "data_offset": 0, 00:15:01.704 "data_size": 65536 00:15:01.704 }, 00:15:01.704 { 00:15:01.704 "name": "BaseBdev3", 00:15:01.704 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:15:01.704 "is_configured": true, 00:15:01.704 "data_offset": 0, 00:15:01.704 "data_size": 65536 00:15:01.704 } 00:15:01.704 ] 00:15:01.704 }' 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.704 21:22:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:02.690 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.691 "name": "raid_bdev1", 00:15:02.691 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:15:02.691 "strip_size_kb": 64, 00:15:02.691 "state": "online", 00:15:02.691 "raid_level": "raid5f", 00:15:02.691 "superblock": false, 00:15:02.691 "num_base_bdevs": 3, 00:15:02.691 "num_base_bdevs_discovered": 3, 00:15:02.691 "num_base_bdevs_operational": 3, 00:15:02.691 "process": { 00:15:02.691 "type": "rebuild", 00:15:02.691 "target": "spare", 00:15:02.691 "progress": { 00:15:02.691 "blocks": 69632, 00:15:02.691 "percent": 53 00:15:02.691 } 00:15:02.691 }, 00:15:02.691 "base_bdevs_list": [ 00:15:02.691 { 00:15:02.691 "name": "spare", 00:15:02.691 "uuid": 
"9c5a7308-6cae-51af-8fb4-4598e99f84ca", 00:15:02.691 "is_configured": true, 00:15:02.691 "data_offset": 0, 00:15:02.691 "data_size": 65536 00:15:02.691 }, 00:15:02.691 { 00:15:02.691 "name": "BaseBdev2", 00:15:02.691 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:15:02.691 "is_configured": true, 00:15:02.691 "data_offset": 0, 00:15:02.691 "data_size": 65536 00:15:02.691 }, 00:15:02.691 { 00:15:02.691 "name": "BaseBdev3", 00:15:02.691 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:15:02.691 "is_configured": true, 00:15:02.691 "data_offset": 0, 00:15:02.691 "data_size": 65536 00:15:02.691 } 00:15:02.691 ] 00:15:02.691 }' 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.691 21:22:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.632 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.632 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.632 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.632 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.632 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.632 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.632 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.632 21:22:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.632 21:22:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.632 21:22:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.632 21:22:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.891 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.891 "name": "raid_bdev1", 00:15:03.891 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:15:03.891 "strip_size_kb": 64, 00:15:03.891 "state": "online", 00:15:03.891 "raid_level": "raid5f", 00:15:03.891 "superblock": false, 00:15:03.891 "num_base_bdevs": 3, 00:15:03.891 "num_base_bdevs_discovered": 3, 00:15:03.891 "num_base_bdevs_operational": 3, 00:15:03.891 "process": { 00:15:03.891 "type": "rebuild", 00:15:03.891 "target": "spare", 00:15:03.891 "progress": { 00:15:03.891 "blocks": 92160, 00:15:03.891 "percent": 70 00:15:03.891 } 00:15:03.891 }, 00:15:03.891 "base_bdevs_list": [ 00:15:03.891 { 00:15:03.891 "name": "spare", 00:15:03.891 "uuid": "9c5a7308-6cae-51af-8fb4-4598e99f84ca", 00:15:03.891 "is_configured": true, 00:15:03.891 "data_offset": 0, 00:15:03.891 "data_size": 65536 00:15:03.891 }, 00:15:03.891 { 00:15:03.891 "name": "BaseBdev2", 00:15:03.891 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:15:03.891 "is_configured": true, 00:15:03.891 "data_offset": 0, 00:15:03.891 "data_size": 65536 00:15:03.891 }, 00:15:03.891 { 00:15:03.891 "name": "BaseBdev3", 00:15:03.891 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:15:03.891 "is_configured": true, 00:15:03.891 "data_offset": 0, 00:15:03.891 "data_size": 65536 00:15:03.891 } 00:15:03.891 ] 00:15:03.891 }' 00:15:03.891 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.891 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.891 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.891 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.891 21:22:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.830 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.830 "name": "raid_bdev1", 00:15:04.830 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:15:04.830 "strip_size_kb": 64, 00:15:04.830 "state": "online", 00:15:04.830 "raid_level": "raid5f", 00:15:04.830 "superblock": false, 00:15:04.830 "num_base_bdevs": 3, 00:15:04.830 "num_base_bdevs_discovered": 3, 00:15:04.830 
"num_base_bdevs_operational": 3, 00:15:04.830 "process": { 00:15:04.830 "type": "rebuild", 00:15:04.830 "target": "spare", 00:15:04.830 "progress": { 00:15:04.830 "blocks": 116736, 00:15:04.831 "percent": 89 00:15:04.831 } 00:15:04.831 }, 00:15:04.831 "base_bdevs_list": [ 00:15:04.831 { 00:15:04.831 "name": "spare", 00:15:04.831 "uuid": "9c5a7308-6cae-51af-8fb4-4598e99f84ca", 00:15:04.831 "is_configured": true, 00:15:04.831 "data_offset": 0, 00:15:04.831 "data_size": 65536 00:15:04.831 }, 00:15:04.831 { 00:15:04.831 "name": "BaseBdev2", 00:15:04.831 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:15:04.831 "is_configured": true, 00:15:04.831 "data_offset": 0, 00:15:04.831 "data_size": 65536 00:15:04.831 }, 00:15:04.831 { 00:15:04.831 "name": "BaseBdev3", 00:15:04.831 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:15:04.831 "is_configured": true, 00:15:04.831 "data_offset": 0, 00:15:04.831 "data_size": 65536 00:15:04.831 } 00:15:04.831 ] 00:15:04.831 }' 00:15:04.831 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.091 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.091 21:22:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.091 21:22:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.091 21:22:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.661 [2024-11-26 21:22:23.582168] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:05.661 [2024-11-26 21:22:23.582350] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:05.661 [2024-11-26 21:22:23.582416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.921 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:05.921 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.921 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.921 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.921 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.921 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.921 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.921 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.921 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.921 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.921 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.181 "name": "raid_bdev1", 00:15:06.181 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:15:06.181 "strip_size_kb": 64, 00:15:06.181 "state": "online", 00:15:06.181 "raid_level": "raid5f", 00:15:06.181 "superblock": false, 00:15:06.181 "num_base_bdevs": 3, 00:15:06.181 "num_base_bdevs_discovered": 3, 00:15:06.181 "num_base_bdevs_operational": 3, 00:15:06.181 "base_bdevs_list": [ 00:15:06.181 { 00:15:06.181 "name": "spare", 00:15:06.181 "uuid": "9c5a7308-6cae-51af-8fb4-4598e99f84ca", 00:15:06.181 "is_configured": true, 00:15:06.181 "data_offset": 0, 00:15:06.181 "data_size": 65536 00:15:06.181 }, 00:15:06.181 { 00:15:06.181 "name": "BaseBdev2", 00:15:06.181 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:15:06.181 "is_configured": true, 00:15:06.181 
"data_offset": 0, 00:15:06.181 "data_size": 65536 00:15:06.181 }, 00:15:06.181 { 00:15:06.181 "name": "BaseBdev3", 00:15:06.181 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:15:06.181 "is_configured": true, 00:15:06.181 "data_offset": 0, 00:15:06.181 "data_size": 65536 00:15:06.181 } 00:15:06.181 ] 00:15:06.181 }' 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.181 21:22:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.181 "name": "raid_bdev1", 00:15:06.181 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:15:06.181 "strip_size_kb": 64, 00:15:06.181 "state": "online", 00:15:06.181 "raid_level": "raid5f", 00:15:06.181 "superblock": false, 00:15:06.181 "num_base_bdevs": 3, 00:15:06.181 "num_base_bdevs_discovered": 3, 00:15:06.181 "num_base_bdevs_operational": 3, 00:15:06.181 "base_bdevs_list": [ 00:15:06.181 { 00:15:06.181 "name": "spare", 00:15:06.181 "uuid": "9c5a7308-6cae-51af-8fb4-4598e99f84ca", 00:15:06.181 "is_configured": true, 00:15:06.181 "data_offset": 0, 00:15:06.181 "data_size": 65536 00:15:06.181 }, 00:15:06.181 { 00:15:06.181 "name": "BaseBdev2", 00:15:06.181 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:15:06.181 "is_configured": true, 00:15:06.181 "data_offset": 0, 00:15:06.181 "data_size": 65536 00:15:06.181 }, 00:15:06.181 { 00:15:06.181 "name": "BaseBdev3", 00:15:06.181 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:15:06.181 "is_configured": true, 00:15:06.181 "data_offset": 0, 00:15:06.181 "data_size": 65536 00:15:06.181 } 00:15:06.181 ] 00:15:06.181 }' 00:15:06.181 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.182 21:22:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.182 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.442 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.442 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.442 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.442 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.442 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.442 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.442 "name": "raid_bdev1", 00:15:06.442 "uuid": "8266a6df-96e3-4554-8ad8-43bdb9fc99cc", 00:15:06.442 "strip_size_kb": 64, 00:15:06.442 "state": "online", 00:15:06.442 "raid_level": "raid5f", 00:15:06.442 "superblock": false, 00:15:06.442 "num_base_bdevs": 3, 00:15:06.442 "num_base_bdevs_discovered": 3, 00:15:06.442 "num_base_bdevs_operational": 3, 00:15:06.442 "base_bdevs_list": [ 00:15:06.442 { 00:15:06.442 "name": "spare", 00:15:06.442 "uuid": "9c5a7308-6cae-51af-8fb4-4598e99f84ca", 00:15:06.442 "is_configured": true, 00:15:06.442 "data_offset": 0, 00:15:06.442 "data_size": 65536 00:15:06.442 }, 00:15:06.442 { 00:15:06.442 
"name": "BaseBdev2", 00:15:06.442 "uuid": "fd42b50a-3d55-53b0-9b49-c3f6bf67c2c8", 00:15:06.442 "is_configured": true, 00:15:06.442 "data_offset": 0, 00:15:06.442 "data_size": 65536 00:15:06.442 }, 00:15:06.442 { 00:15:06.442 "name": "BaseBdev3", 00:15:06.442 "uuid": "05132e25-d96a-5f40-94f8-b208c4a06228", 00:15:06.442 "is_configured": true, 00:15:06.442 "data_offset": 0, 00:15:06.442 "data_size": 65536 00:15:06.442 } 00:15:06.442 ] 00:15:06.442 }' 00:15:06.442 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.442 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.702 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.702 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.702 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.702 [2024-11-26 21:22:24.700970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.702 [2024-11-26 21:22:24.701065] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.702 [2024-11-26 21:22:24.701166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.702 [2024-11-26 21:22:24.701254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.702 [2024-11-26 21:22:24.701303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:06.702 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.702 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.702 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:06.702 21:22:24 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.702 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.702 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.702 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:06.702 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:06.703 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:06.703 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:06.703 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.703 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:06.703 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:06.703 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:06.703 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:06.703 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:06.703 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:06.703 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:06.703 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:06.963 /dev/nbd0 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.963 1+0 records in 00:15:06.963 1+0 records out 00:15:06.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412733 s, 9.9 MB/s 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:06.963 21:22:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.963 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:06.963 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:07.223 /dev/nbd1 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.223 1+0 records in 00:15:07.223 1+0 records out 00:15:07.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306343 s, 13.4 MB/s 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:07.223 21:22:25 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.223 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.483 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81373 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81373 ']' 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81373 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81373 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81373' 00:15:07.743 killing process with pid 81373 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81373 00:15:07.743 Received shutdown signal, test time was about 60.000000 seconds 00:15:07.743 00:15:07.743 Latency(us) 00:15:07.743 [2024-11-26T21:22:25.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.743 [2024-11-26T21:22:25.899Z] =================================================================================================================== 00:15:07.743 [2024-11-26T21:22:25.899Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:07.743 [2024-11-26 21:22:25.895182] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.743 21:22:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81373 00:15:08.313 [2024-11-26 21:22:26.301847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:09.697 00:15:09.697 real 0m15.324s 00:15:09.697 user 0m18.632s 00:15:09.697 sys 0m2.167s 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.697 ************************************ 00:15:09.697 END TEST raid5f_rebuild_test 00:15:09.697 ************************************ 00:15:09.697 21:22:27 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:09.697 21:22:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:09.697 21:22:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.697 21:22:27 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:15:09.697 ************************************ 00:15:09.697 START TEST raid5f_rebuild_test_sb 00:15:09.697 ************************************ 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81809 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81809 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81809 ']' 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.697 21:22:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.697 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.697 Zero copy mechanism will not be used. 00:15:09.697 [2024-11-26 21:22:27.625286] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:15:09.697 [2024-11-26 21:22:27.625404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81809 ] 00:15:09.697 [2024-11-26 21:22:27.800201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.958 [2024-11-26 21:22:27.929819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.218 [2024-11-26 21:22:28.153635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.218 [2024-11-26 21:22:28.153679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.479 BaseBdev1_malloc 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.479 [2024-11-26 21:22:28.495621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:10.479 [2024-11-26 21:22:28.495785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.479 [2024-11-26 21:22:28.495826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:10.479 [2024-11-26 21:22:28.495858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.479 [2024-11-26 21:22:28.498178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.479 [2024-11-26 21:22:28.498252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:10.479 BaseBdev1 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:10.479 21:22:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.479 BaseBdev2_malloc 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.479 [2024-11-26 21:22:28.555460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:10.479 [2024-11-26 21:22:28.555570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.479 [2024-11-26 21:22:28.555610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.479 [2024-11-26 21:22:28.555640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.479 [2024-11-26 21:22:28.557908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.479 BaseBdev2 00:15:10.479 [2024-11-26 21:22:28.557990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.479 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:10.740 BaseBdev3_malloc 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.740 [2024-11-26 21:22:28.649491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:10.740 [2024-11-26 21:22:28.649590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.740 [2024-11-26 21:22:28.649628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.740 [2024-11-26 21:22:28.649658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.740 [2024-11-26 21:22:28.651918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.740 [2024-11-26 21:22:28.652007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:10.740 BaseBdev3 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.740 spare_malloc 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.740 spare_delay 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.740 [2024-11-26 21:22:28.721435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.740 [2024-11-26 21:22:28.721538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.740 [2024-11-26 21:22:28.721558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:10.740 [2024-11-26 21:22:28.721569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.740 [2024-11-26 21:22:28.723808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.740 [2024-11-26 21:22:28.723850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.740 spare 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.740 [2024-11-26 21:22:28.733495] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.740 [2024-11-26 21:22:28.735445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.740 [2024-11-26 21:22:28.735544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.740 [2024-11-26 21:22:28.735737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.740 [2024-11-26 21:22:28.735781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:10.740 [2024-11-26 21:22:28.736055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:10.740 [2024-11-26 21:22:28.741745] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.740 [2024-11-26 21:22:28.741805] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.740 [2024-11-26 21:22:28.742006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.740 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.741 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.741 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.741 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.741 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.741 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.741 "name": "raid_bdev1", 00:15:10.741 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:10.741 "strip_size_kb": 64, 00:15:10.741 "state": "online", 00:15:10.741 "raid_level": "raid5f", 00:15:10.741 "superblock": true, 00:15:10.741 "num_base_bdevs": 3, 00:15:10.741 "num_base_bdevs_discovered": 3, 00:15:10.741 "num_base_bdevs_operational": 3, 00:15:10.741 "base_bdevs_list": [ 00:15:10.741 { 00:15:10.741 "name": "BaseBdev1", 00:15:10.741 "uuid": "5ff0f717-fae3-533a-9fa2-fbd2abd37009", 00:15:10.741 "is_configured": true, 00:15:10.741 "data_offset": 2048, 00:15:10.741 "data_size": 63488 00:15:10.741 }, 00:15:10.741 { 00:15:10.741 "name": "BaseBdev2", 00:15:10.741 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:10.741 "is_configured": true, 00:15:10.741 "data_offset": 2048, 00:15:10.741 "data_size": 63488 00:15:10.741 }, 00:15:10.741 { 00:15:10.741 "name": "BaseBdev3", 00:15:10.741 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:10.741 "is_configured": true, 
00:15:10.741 "data_offset": 2048, 00:15:10.741 "data_size": 63488 00:15:10.741 } 00:15:10.741 ] 00:15:10.741 }' 00:15:10.741 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.741 21:22:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.310 [2024-11-26 21:22:29.196123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:11.310 21:22:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.310 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:11.310 [2024-11-26 21:22:29.447521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:11.570 /dev/nbd0 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.570 1+0 records in 00:15:11.570 1+0 records out 00:15:11.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446434 s, 9.2 MB/s 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:11.570 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:11.830 496+0 records in 00:15:11.830 496+0 records out 00:15:11.830 65011712 bytes (65 MB, 62 MiB) copied, 0.361088 s, 180 MB/s 00:15:11.830 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:11.830 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.830 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:11.830 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.830 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:11.830 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.830 21:22:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.097 [2024-11-26 21:22:30.127234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.097 [2024-11-26 21:22:30.142429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.097 21:22:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.097 "name": "raid_bdev1", 00:15:12.097 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:12.097 "strip_size_kb": 64, 00:15:12.097 "state": "online", 00:15:12.097 "raid_level": "raid5f", 00:15:12.097 "superblock": true, 00:15:12.097 "num_base_bdevs": 3, 00:15:12.097 "num_base_bdevs_discovered": 2, 00:15:12.097 "num_base_bdevs_operational": 2, 00:15:12.097 "base_bdevs_list": [ 00:15:12.097 { 00:15:12.097 "name": null, 00:15:12.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.097 "is_configured": false, 00:15:12.097 "data_offset": 0, 00:15:12.097 "data_size": 63488 00:15:12.097 }, 00:15:12.097 { 00:15:12.097 "name": "BaseBdev2", 00:15:12.097 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:12.097 "is_configured": true, 00:15:12.097 "data_offset": 2048, 00:15:12.097 "data_size": 63488 00:15:12.097 }, 00:15:12.097 { 00:15:12.097 "name": "BaseBdev3", 00:15:12.097 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:12.097 "is_configured": true, 00:15:12.097 "data_offset": 2048, 00:15:12.097 "data_size": 63488 00:15:12.097 } 00:15:12.097 ] 00:15:12.097 }' 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.097 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.700 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.700 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.700 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.700 [2024-11-26 21:22:30.577735] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.700 [2024-11-26 21:22:30.593285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:12.700 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.700 21:22:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:12.700 [2024-11-26 21:22:30.600671] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.640 "name": "raid_bdev1", 00:15:13.640 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:13.640 "strip_size_kb": 64, 00:15:13.640 "state": "online", 00:15:13.640 "raid_level": "raid5f", 00:15:13.640 
"superblock": true, 00:15:13.640 "num_base_bdevs": 3, 00:15:13.640 "num_base_bdevs_discovered": 3, 00:15:13.640 "num_base_bdevs_operational": 3, 00:15:13.640 "process": { 00:15:13.640 "type": "rebuild", 00:15:13.640 "target": "spare", 00:15:13.640 "progress": { 00:15:13.640 "blocks": 20480, 00:15:13.640 "percent": 16 00:15:13.640 } 00:15:13.640 }, 00:15:13.640 "base_bdevs_list": [ 00:15:13.640 { 00:15:13.640 "name": "spare", 00:15:13.640 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:13.640 "is_configured": true, 00:15:13.640 "data_offset": 2048, 00:15:13.640 "data_size": 63488 00:15:13.640 }, 00:15:13.640 { 00:15:13.640 "name": "BaseBdev2", 00:15:13.640 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:13.640 "is_configured": true, 00:15:13.640 "data_offset": 2048, 00:15:13.640 "data_size": 63488 00:15:13.640 }, 00:15:13.640 { 00:15:13.640 "name": "BaseBdev3", 00:15:13.640 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:13.640 "is_configured": true, 00:15:13.640 "data_offset": 2048, 00:15:13.640 "data_size": 63488 00:15:13.640 } 00:15:13.640 ] 00:15:13.640 }' 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.640 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.640 [2024-11-26 21:22:31.759936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:13.900 [2024-11-26 21:22:31.809431] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:13.900 [2024-11-26 21:22:31.809539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.900 [2024-11-26 21:22:31.809586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.900 [2024-11-26 21:22:31.809607] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:13.900 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.900 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:13.900 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.900 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.901 
21:22:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.901 "name": "raid_bdev1", 00:15:13.901 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:13.901 "strip_size_kb": 64, 00:15:13.901 "state": "online", 00:15:13.901 "raid_level": "raid5f", 00:15:13.901 "superblock": true, 00:15:13.901 "num_base_bdevs": 3, 00:15:13.901 "num_base_bdevs_discovered": 2, 00:15:13.901 "num_base_bdevs_operational": 2, 00:15:13.901 "base_bdevs_list": [ 00:15:13.901 { 00:15:13.901 "name": null, 00:15:13.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.901 "is_configured": false, 00:15:13.901 "data_offset": 0, 00:15:13.901 "data_size": 63488 00:15:13.901 }, 00:15:13.901 { 00:15:13.901 "name": "BaseBdev2", 00:15:13.901 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:13.901 "is_configured": true, 00:15:13.901 "data_offset": 2048, 00:15:13.901 "data_size": 63488 00:15:13.901 }, 00:15:13.901 { 00:15:13.901 "name": "BaseBdev3", 00:15:13.901 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:13.901 "is_configured": true, 00:15:13.901 "data_offset": 2048, 00:15:13.901 "data_size": 63488 00:15:13.901 } 00:15:13.901 ] 00:15:13.901 }' 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.901 21:22:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.161 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.161 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.161 21:22:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.161 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.161 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.161 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.161 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.421 "name": "raid_bdev1", 00:15:14.421 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:14.421 "strip_size_kb": 64, 00:15:14.421 "state": "online", 00:15:14.421 "raid_level": "raid5f", 00:15:14.421 "superblock": true, 00:15:14.421 "num_base_bdevs": 3, 00:15:14.421 "num_base_bdevs_discovered": 2, 00:15:14.421 "num_base_bdevs_operational": 2, 00:15:14.421 "base_bdevs_list": [ 00:15:14.421 { 00:15:14.421 "name": null, 00:15:14.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.421 "is_configured": false, 00:15:14.421 "data_offset": 0, 00:15:14.421 "data_size": 63488 00:15:14.421 }, 00:15:14.421 { 00:15:14.421 "name": "BaseBdev2", 00:15:14.421 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:14.421 "is_configured": true, 00:15:14.421 "data_offset": 2048, 00:15:14.421 "data_size": 63488 00:15:14.421 }, 00:15:14.421 { 00:15:14.421 "name": "BaseBdev3", 00:15:14.421 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:14.421 "is_configured": true, 00:15:14.421 "data_offset": 2048, 00:15:14.421 
"data_size": 63488 00:15:14.421 } 00:15:14.421 ] 00:15:14.421 }' 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.421 [2024-11-26 21:22:32.441219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.421 [2024-11-26 21:22:32.455039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.421 21:22:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:14.421 [2024-11-26 21:22:32.462480] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.362 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.362 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.362 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.362 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.362 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:15.362 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.362 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.362 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.362 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.362 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.362 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.362 "name": "raid_bdev1", 00:15:15.362 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:15.362 "strip_size_kb": 64, 00:15:15.362 "state": "online", 00:15:15.362 "raid_level": "raid5f", 00:15:15.362 "superblock": true, 00:15:15.362 "num_base_bdevs": 3, 00:15:15.362 "num_base_bdevs_discovered": 3, 00:15:15.362 "num_base_bdevs_operational": 3, 00:15:15.362 "process": { 00:15:15.362 "type": "rebuild", 00:15:15.362 "target": "spare", 00:15:15.362 "progress": { 00:15:15.362 "blocks": 20480, 00:15:15.362 "percent": 16 00:15:15.362 } 00:15:15.362 }, 00:15:15.362 "base_bdevs_list": [ 00:15:15.362 { 00:15:15.362 "name": "spare", 00:15:15.362 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:15.362 "is_configured": true, 00:15:15.362 "data_offset": 2048, 00:15:15.362 "data_size": 63488 00:15:15.362 }, 00:15:15.362 { 00:15:15.362 "name": "BaseBdev2", 00:15:15.362 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:15.362 "is_configured": true, 00:15:15.362 "data_offset": 2048, 00:15:15.362 "data_size": 63488 00:15:15.362 }, 00:15:15.362 { 00:15:15.362 "name": "BaseBdev3", 00:15:15.362 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:15.363 "is_configured": true, 00:15:15.363 "data_offset": 2048, 00:15:15.363 "data_size": 63488 00:15:15.363 } 00:15:15.363 ] 00:15:15.363 }' 
00:15:15.363 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:15.623 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=555 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.623 "name": "raid_bdev1", 00:15:15.623 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:15.623 "strip_size_kb": 64, 00:15:15.623 "state": "online", 00:15:15.623 "raid_level": "raid5f", 00:15:15.623 "superblock": true, 00:15:15.623 "num_base_bdevs": 3, 00:15:15.623 "num_base_bdevs_discovered": 3, 00:15:15.623 "num_base_bdevs_operational": 3, 00:15:15.623 "process": { 00:15:15.623 "type": "rebuild", 00:15:15.623 "target": "spare", 00:15:15.623 "progress": { 00:15:15.623 "blocks": 22528, 00:15:15.623 "percent": 17 00:15:15.623 } 00:15:15.623 }, 00:15:15.623 "base_bdevs_list": [ 00:15:15.623 { 00:15:15.623 "name": "spare", 00:15:15.623 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:15.623 "is_configured": true, 00:15:15.623 "data_offset": 2048, 00:15:15.623 "data_size": 63488 00:15:15.623 }, 00:15:15.623 { 00:15:15.623 "name": "BaseBdev2", 00:15:15.623 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:15.623 "is_configured": true, 00:15:15.623 "data_offset": 2048, 00:15:15.623 "data_size": 63488 00:15:15.623 }, 00:15:15.623 { 00:15:15.623 "name": "BaseBdev3", 00:15:15.623 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:15.623 "is_configured": true, 00:15:15.623 "data_offset": 2048, 00:15:15.623 "data_size": 63488 00:15:15.623 } 00:15:15.623 ] 00:15:15.623 }' 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.623 21:22:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.563 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.563 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.563 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.563 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.563 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.563 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.563 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.822 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.822 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.822 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.822 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.822 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.822 "name": "raid_bdev1", 00:15:16.822 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:16.822 "strip_size_kb": 64, 00:15:16.822 "state": "online", 00:15:16.823 "raid_level": "raid5f", 00:15:16.823 "superblock": true, 00:15:16.823 "num_base_bdevs": 3, 00:15:16.823 "num_base_bdevs_discovered": 3, 00:15:16.823 
"num_base_bdevs_operational": 3, 00:15:16.823 "process": { 00:15:16.823 "type": "rebuild", 00:15:16.823 "target": "spare", 00:15:16.823 "progress": { 00:15:16.823 "blocks": 45056, 00:15:16.823 "percent": 35 00:15:16.823 } 00:15:16.823 }, 00:15:16.823 "base_bdevs_list": [ 00:15:16.823 { 00:15:16.823 "name": "spare", 00:15:16.823 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:16.823 "is_configured": true, 00:15:16.823 "data_offset": 2048, 00:15:16.823 "data_size": 63488 00:15:16.823 }, 00:15:16.823 { 00:15:16.823 "name": "BaseBdev2", 00:15:16.823 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:16.823 "is_configured": true, 00:15:16.823 "data_offset": 2048, 00:15:16.823 "data_size": 63488 00:15:16.823 }, 00:15:16.823 { 00:15:16.823 "name": "BaseBdev3", 00:15:16.823 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:16.823 "is_configured": true, 00:15:16.823 "data_offset": 2048, 00:15:16.823 "data_size": 63488 00:15:16.823 } 00:15:16.823 ] 00:15:16.823 }' 00:15:16.823 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.823 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.823 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.823 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.823 21:22:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.763 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.763 "name": "raid_bdev1", 00:15:17.763 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:17.763 "strip_size_kb": 64, 00:15:17.763 "state": "online", 00:15:17.763 "raid_level": "raid5f", 00:15:17.763 "superblock": true, 00:15:17.763 "num_base_bdevs": 3, 00:15:17.763 "num_base_bdevs_discovered": 3, 00:15:17.763 "num_base_bdevs_operational": 3, 00:15:17.763 "process": { 00:15:17.763 "type": "rebuild", 00:15:17.763 "target": "spare", 00:15:17.763 "progress": { 00:15:17.763 "blocks": 67584, 00:15:17.763 "percent": 53 00:15:17.763 } 00:15:17.763 }, 00:15:17.763 "base_bdevs_list": [ 00:15:17.763 { 00:15:17.763 "name": "spare", 00:15:17.763 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:17.763 "is_configured": true, 00:15:17.763 "data_offset": 2048, 00:15:17.763 "data_size": 63488 00:15:17.763 }, 00:15:17.763 { 00:15:17.763 "name": "BaseBdev2", 00:15:17.763 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:17.763 "is_configured": true, 00:15:17.763 "data_offset": 2048, 00:15:17.763 "data_size": 63488 00:15:17.763 }, 00:15:17.763 { 00:15:17.763 "name": "BaseBdev3", 
00:15:17.763 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:17.763 "is_configured": true, 00:15:17.763 "data_offset": 2048, 00:15:17.763 "data_size": 63488 00:15:17.763 } 00:15:17.763 ] 00:15:17.763 }' 00:15:17.764 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.024 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.024 21:22:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.024 21:22:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.024 21:22:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.963 "name": "raid_bdev1", 00:15:18.963 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:18.963 "strip_size_kb": 64, 00:15:18.963 "state": "online", 00:15:18.963 "raid_level": "raid5f", 00:15:18.963 "superblock": true, 00:15:18.963 "num_base_bdevs": 3, 00:15:18.963 "num_base_bdevs_discovered": 3, 00:15:18.963 "num_base_bdevs_operational": 3, 00:15:18.963 "process": { 00:15:18.963 "type": "rebuild", 00:15:18.963 "target": "spare", 00:15:18.963 "progress": { 00:15:18.963 "blocks": 92160, 00:15:18.963 "percent": 72 00:15:18.963 } 00:15:18.963 }, 00:15:18.963 "base_bdevs_list": [ 00:15:18.963 { 00:15:18.963 "name": "spare", 00:15:18.963 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:18.963 "is_configured": true, 00:15:18.963 "data_offset": 2048, 00:15:18.963 "data_size": 63488 00:15:18.963 }, 00:15:18.963 { 00:15:18.963 "name": "BaseBdev2", 00:15:18.963 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:18.963 "is_configured": true, 00:15:18.963 "data_offset": 2048, 00:15:18.963 "data_size": 63488 00:15:18.963 }, 00:15:18.963 { 00:15:18.963 "name": "BaseBdev3", 00:15:18.963 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:18.963 "is_configured": true, 00:15:18.963 "data_offset": 2048, 00:15:18.963 "data_size": 63488 00:15:18.963 } 00:15:18.963 ] 00:15:18.963 }' 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.963 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.223 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.223 21:22:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.160 21:22:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.160 "name": "raid_bdev1", 00:15:20.160 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:20.160 "strip_size_kb": 64, 00:15:20.160 "state": "online", 00:15:20.160 "raid_level": "raid5f", 00:15:20.160 "superblock": true, 00:15:20.160 "num_base_bdevs": 3, 00:15:20.160 "num_base_bdevs_discovered": 3, 00:15:20.160 "num_base_bdevs_operational": 3, 00:15:20.160 "process": { 00:15:20.160 "type": "rebuild", 00:15:20.160 "target": "spare", 00:15:20.160 "progress": { 00:15:20.160 "blocks": 114688, 00:15:20.160 "percent": 90 00:15:20.160 } 00:15:20.160 }, 00:15:20.160 "base_bdevs_list": [ 00:15:20.160 { 00:15:20.160 "name": "spare", 00:15:20.160 "uuid": 
"4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:20.160 "is_configured": true, 00:15:20.160 "data_offset": 2048, 00:15:20.160 "data_size": 63488 00:15:20.160 }, 00:15:20.160 { 00:15:20.160 "name": "BaseBdev2", 00:15:20.160 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:20.160 "is_configured": true, 00:15:20.160 "data_offset": 2048, 00:15:20.160 "data_size": 63488 00:15:20.160 }, 00:15:20.160 { 00:15:20.160 "name": "BaseBdev3", 00:15:20.160 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:20.160 "is_configured": true, 00:15:20.160 "data_offset": 2048, 00:15:20.160 "data_size": 63488 00:15:20.160 } 00:15:20.160 ] 00:15:20.160 }' 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.160 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.420 21:22:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.679 [2024-11-26 21:22:38.703174] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:20.679 [2024-11-26 21:22:38.703348] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:20.680 [2024-11-26 21:22:38.703476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.254 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.254 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.254 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.254 21:22:39 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.254 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.254 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.254 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.254 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.254 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.257 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.257 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.257 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.257 "name": "raid_bdev1", 00:15:21.257 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:21.257 "strip_size_kb": 64, 00:15:21.257 "state": "online", 00:15:21.257 "raid_level": "raid5f", 00:15:21.257 "superblock": true, 00:15:21.257 "num_base_bdevs": 3, 00:15:21.257 "num_base_bdevs_discovered": 3, 00:15:21.257 "num_base_bdevs_operational": 3, 00:15:21.257 "base_bdevs_list": [ 00:15:21.257 { 00:15:21.257 "name": "spare", 00:15:21.257 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:21.257 "is_configured": true, 00:15:21.257 "data_offset": 2048, 00:15:21.257 "data_size": 63488 00:15:21.257 }, 00:15:21.257 { 00:15:21.257 "name": "BaseBdev2", 00:15:21.257 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:21.257 "is_configured": true, 00:15:21.258 "data_offset": 2048, 00:15:21.258 "data_size": 63488 00:15:21.258 }, 00:15:21.258 { 00:15:21.258 "name": "BaseBdev3", 00:15:21.258 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:21.258 "is_configured": true, 00:15:21.258 "data_offset": 2048, 00:15:21.258 "data_size": 63488 00:15:21.258 } 
00:15:21.258 ] 00:15:21.258 }' 00:15:21.258 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.527 "name": "raid_bdev1", 00:15:21.527 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:21.527 "strip_size_kb": 64, 00:15:21.527 "state": "online", 00:15:21.527 "raid_level": 
"raid5f", 00:15:21.527 "superblock": true, 00:15:21.527 "num_base_bdevs": 3, 00:15:21.527 "num_base_bdevs_discovered": 3, 00:15:21.527 "num_base_bdevs_operational": 3, 00:15:21.527 "base_bdevs_list": [ 00:15:21.527 { 00:15:21.527 "name": "spare", 00:15:21.527 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:21.527 "is_configured": true, 00:15:21.527 "data_offset": 2048, 00:15:21.527 "data_size": 63488 00:15:21.527 }, 00:15:21.527 { 00:15:21.527 "name": "BaseBdev2", 00:15:21.527 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:21.527 "is_configured": true, 00:15:21.527 "data_offset": 2048, 00:15:21.527 "data_size": 63488 00:15:21.527 }, 00:15:21.527 { 00:15:21.527 "name": "BaseBdev3", 00:15:21.527 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:21.527 "is_configured": true, 00:15:21.527 "data_offset": 2048, 00:15:21.527 "data_size": 63488 00:15:21.527 } 00:15:21.527 ] 00:15:21.527 }' 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.527 21:22:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.527 "name": "raid_bdev1", 00:15:21.527 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:21.527 "strip_size_kb": 64, 00:15:21.527 "state": "online", 00:15:21.527 "raid_level": "raid5f", 00:15:21.527 "superblock": true, 00:15:21.527 "num_base_bdevs": 3, 00:15:21.527 "num_base_bdevs_discovered": 3, 00:15:21.527 "num_base_bdevs_operational": 3, 00:15:21.527 "base_bdevs_list": [ 00:15:21.527 { 00:15:21.527 "name": "spare", 00:15:21.527 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:21.527 "is_configured": true, 00:15:21.527 "data_offset": 2048, 00:15:21.527 "data_size": 63488 00:15:21.527 }, 00:15:21.527 { 00:15:21.527 "name": "BaseBdev2", 00:15:21.527 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:21.527 "is_configured": true, 00:15:21.527 "data_offset": 2048, 00:15:21.527 
"data_size": 63488 00:15:21.527 }, 00:15:21.527 { 00:15:21.527 "name": "BaseBdev3", 00:15:21.527 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:21.527 "is_configured": true, 00:15:21.527 "data_offset": 2048, 00:15:21.527 "data_size": 63488 00:15:21.527 } 00:15:21.527 ] 00:15:21.527 }' 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.527 21:22:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.096 [2024-11-26 21:22:40.050938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.096 [2024-11-26 21:22:40.051043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.096 [2024-11-26 21:22:40.051145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.096 [2024-11-26 21:22:40.051235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.096 [2024-11-26 21:22:40.051291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.096 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:22.356 /dev/nbd0 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.356 1+0 records in 00:15:22.356 1+0 records out 00:15:22.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602549 s, 6.8 MB/s 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.356 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:22.616 /dev/nbd1 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.616 1+0 records in 00:15:22.616 1+0 records out 00:15:22.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278283 s, 14.7 MB/s 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- 
# '[' 4096 '!=' 0 ']' 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.616 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:22.876 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.876 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.876 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.876 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.876 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.876 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.876 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:15:22.876 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.876 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.876 21:22:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.137 [2024-11-26 21:22:41.167424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:23.137 [2024-11-26 21:22:41.167488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.137 [2024-11-26 21:22:41.167509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:23.137 [2024-11-26 21:22:41.167519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.137 [2024-11-26 21:22:41.169896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.137 [2024-11-26 21:22:41.169937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:23.137 [2024-11-26 21:22:41.170021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:23.137 [2024-11-26 21:22:41.170069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.137 [2024-11-26 21:22:41.170197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.137 [2024-11-26 21:22:41.170307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.137 spare 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.137 [2024-11-26 21:22:41.270197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:23.137 [2024-11-26 21:22:41.270226] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:23.137 [2024-11-26 21:22:41.270498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:23.137 [2024-11-26 21:22:41.275546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:23.137 [2024-11-26 21:22:41.275567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:23.137 [2024-11-26 21:22:41.275736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.137 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.397 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.397 21:22:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.397 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.397 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.397 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.397 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.397 "name": "raid_bdev1", 00:15:23.397 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:23.397 "strip_size_kb": 64, 00:15:23.397 "state": "online", 00:15:23.397 "raid_level": "raid5f", 00:15:23.397 "superblock": true, 00:15:23.397 "num_base_bdevs": 3, 00:15:23.397 "num_base_bdevs_discovered": 3, 00:15:23.397 "num_base_bdevs_operational": 3, 00:15:23.397 "base_bdevs_list": [ 00:15:23.397 { 00:15:23.397 "name": "spare", 00:15:23.397 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:23.397 "is_configured": true, 00:15:23.397 "data_offset": 2048, 00:15:23.397 "data_size": 63488 00:15:23.397 }, 00:15:23.397 { 00:15:23.397 "name": "BaseBdev2", 00:15:23.397 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:23.397 "is_configured": true, 00:15:23.397 "data_offset": 2048, 00:15:23.397 "data_size": 63488 00:15:23.397 }, 00:15:23.397 { 00:15:23.397 "name": "BaseBdev3", 00:15:23.397 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:23.397 "is_configured": true, 00:15:23.397 "data_offset": 2048, 00:15:23.397 "data_size": 63488 00:15:23.397 } 00:15:23.397 ] 00:15:23.397 }' 00:15:23.397 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.397 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.657 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.657 21:22:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.657 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.657 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.657 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.657 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.657 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.657 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.657 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.657 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.657 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.657 "name": "raid_bdev1", 00:15:23.657 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:23.657 "strip_size_kb": 64, 00:15:23.657 "state": "online", 00:15:23.657 "raid_level": "raid5f", 00:15:23.657 "superblock": true, 00:15:23.657 "num_base_bdevs": 3, 00:15:23.657 "num_base_bdevs_discovered": 3, 00:15:23.657 "num_base_bdevs_operational": 3, 00:15:23.657 "base_bdevs_list": [ 00:15:23.657 { 00:15:23.657 "name": "spare", 00:15:23.657 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:23.657 "is_configured": true, 00:15:23.657 "data_offset": 2048, 00:15:23.657 "data_size": 63488 00:15:23.657 }, 00:15:23.657 { 00:15:23.657 "name": "BaseBdev2", 00:15:23.657 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:23.657 "is_configured": true, 00:15:23.657 "data_offset": 2048, 00:15:23.657 "data_size": 63488 00:15:23.657 }, 00:15:23.657 { 00:15:23.657 "name": "BaseBdev3", 00:15:23.657 "uuid": 
"c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:23.657 "is_configured": true, 00:15:23.657 "data_offset": 2048, 00:15:23.657 "data_size": 63488 00:15:23.657 } 00:15:23.657 ] 00:15:23.657 }' 00:15:23.657 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.917 [2024-11-26 21:22:41.929343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:23.917 
21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.917 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.917 "name": "raid_bdev1", 00:15:23.917 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:23.917 "strip_size_kb": 64, 00:15:23.917 "state": "online", 00:15:23.917 "raid_level": "raid5f", 00:15:23.917 "superblock": true, 00:15:23.917 "num_base_bdevs": 3, 00:15:23.917 "num_base_bdevs_discovered": 2, 00:15:23.917 "num_base_bdevs_operational": 2, 
00:15:23.917 "base_bdevs_list": [ 00:15:23.917 { 00:15:23.917 "name": null, 00:15:23.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.917 "is_configured": false, 00:15:23.917 "data_offset": 0, 00:15:23.917 "data_size": 63488 00:15:23.917 }, 00:15:23.917 { 00:15:23.917 "name": "BaseBdev2", 00:15:23.917 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:23.917 "is_configured": true, 00:15:23.917 "data_offset": 2048, 00:15:23.917 "data_size": 63488 00:15:23.917 }, 00:15:23.917 { 00:15:23.918 "name": "BaseBdev3", 00:15:23.918 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:23.918 "is_configured": true, 00:15:23.918 "data_offset": 2048, 00:15:23.918 "data_size": 63488 00:15:23.918 } 00:15:23.918 ] 00:15:23.918 }' 00:15:23.918 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.918 21:22:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.488 21:22:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.488 21:22:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.488 21:22:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.488 [2024-11-26 21:22:42.384650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.488 [2024-11-26 21:22:42.384769] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:24.488 [2024-11-26 21:22:42.384786] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:24.488 [2024-11-26 21:22:42.384824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.488 [2024-11-26 21:22:42.400417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:24.488 21:22:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.488 21:22:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:24.488 [2024-11-26 21:22:42.407332] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.428 "name": "raid_bdev1", 00:15:25.428 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:25.428 "strip_size_kb": 64, 00:15:25.428 "state": "online", 00:15:25.428 
"raid_level": "raid5f", 00:15:25.428 "superblock": true, 00:15:25.428 "num_base_bdevs": 3, 00:15:25.428 "num_base_bdevs_discovered": 3, 00:15:25.428 "num_base_bdevs_operational": 3, 00:15:25.428 "process": { 00:15:25.428 "type": "rebuild", 00:15:25.428 "target": "spare", 00:15:25.428 "progress": { 00:15:25.428 "blocks": 20480, 00:15:25.428 "percent": 16 00:15:25.428 } 00:15:25.428 }, 00:15:25.428 "base_bdevs_list": [ 00:15:25.428 { 00:15:25.428 "name": "spare", 00:15:25.428 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:25.428 "is_configured": true, 00:15:25.428 "data_offset": 2048, 00:15:25.428 "data_size": 63488 00:15:25.428 }, 00:15:25.428 { 00:15:25.428 "name": "BaseBdev2", 00:15:25.428 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:25.428 "is_configured": true, 00:15:25.428 "data_offset": 2048, 00:15:25.428 "data_size": 63488 00:15:25.428 }, 00:15:25.428 { 00:15:25.428 "name": "BaseBdev3", 00:15:25.428 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:25.428 "is_configured": true, 00:15:25.428 "data_offset": 2048, 00:15:25.428 "data_size": 63488 00:15:25.428 } 00:15:25.428 ] 00:15:25.428 }' 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.428 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.428 [2024-11-26 21:22:43.542365] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.688 [2024-11-26 21:22:43.616036] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.688 [2024-11-26 21:22:43.616157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.688 [2024-11-26 21:22:43.616194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.688 [2024-11-26 21:22:43.616218] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.688 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.689 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.689 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.689 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.689 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.689 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.689 "name": "raid_bdev1", 00:15:25.689 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:25.689 "strip_size_kb": 64, 00:15:25.689 "state": "online", 00:15:25.689 "raid_level": "raid5f", 00:15:25.689 "superblock": true, 00:15:25.689 "num_base_bdevs": 3, 00:15:25.689 "num_base_bdevs_discovered": 2, 00:15:25.689 "num_base_bdevs_operational": 2, 00:15:25.689 "base_bdevs_list": [ 00:15:25.689 { 00:15:25.689 "name": null, 00:15:25.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.689 "is_configured": false, 00:15:25.689 "data_offset": 0, 00:15:25.689 "data_size": 63488 00:15:25.689 }, 00:15:25.689 { 00:15:25.689 "name": "BaseBdev2", 00:15:25.689 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:25.689 "is_configured": true, 00:15:25.689 "data_offset": 2048, 00:15:25.689 "data_size": 63488 00:15:25.689 }, 00:15:25.689 { 00:15:25.689 "name": "BaseBdev3", 00:15:25.689 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:25.689 "is_configured": true, 00:15:25.689 "data_offset": 2048, 00:15:25.689 "data_size": 63488 00:15:25.689 } 00:15:25.689 ] 00:15:25.689 }' 00:15:25.689 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.689 21:22:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.259 21:22:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:26.259 21:22:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.259 21:22:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.259 [2024-11-26 21:22:44.119568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:26.259 [2024-11-26 21:22:44.119621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.259 [2024-11-26 21:22:44.119642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:26.259 [2024-11-26 21:22:44.119657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.259 [2024-11-26 21:22:44.120177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.259 [2024-11-26 21:22:44.120200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:26.259 [2024-11-26 21:22:44.120284] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:26.259 [2024-11-26 21:22:44.120308] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:26.259 [2024-11-26 21:22:44.120316] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:26.259 [2024-11-26 21:22:44.120339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.259 [2024-11-26 21:22:44.134044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:26.259 spare 00:15:26.259 21:22:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.259 21:22:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:26.259 [2024-11-26 21:22:44.140760] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.268 "name": "raid_bdev1", 00:15:27.268 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:27.268 "strip_size_kb": 64, 00:15:27.268 "state": 
"online", 00:15:27.268 "raid_level": "raid5f", 00:15:27.268 "superblock": true, 00:15:27.268 "num_base_bdevs": 3, 00:15:27.268 "num_base_bdevs_discovered": 3, 00:15:27.268 "num_base_bdevs_operational": 3, 00:15:27.268 "process": { 00:15:27.268 "type": "rebuild", 00:15:27.268 "target": "spare", 00:15:27.268 "progress": { 00:15:27.268 "blocks": 20480, 00:15:27.268 "percent": 16 00:15:27.268 } 00:15:27.268 }, 00:15:27.268 "base_bdevs_list": [ 00:15:27.268 { 00:15:27.268 "name": "spare", 00:15:27.268 "uuid": "4eafcf66-7450-5daf-ad1d-b29153ea3d07", 00:15:27.268 "is_configured": true, 00:15:27.268 "data_offset": 2048, 00:15:27.268 "data_size": 63488 00:15:27.268 }, 00:15:27.268 { 00:15:27.268 "name": "BaseBdev2", 00:15:27.268 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:27.268 "is_configured": true, 00:15:27.268 "data_offset": 2048, 00:15:27.268 "data_size": 63488 00:15:27.268 }, 00:15:27.268 { 00:15:27.268 "name": "BaseBdev3", 00:15:27.268 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:27.268 "is_configured": true, 00:15:27.268 "data_offset": 2048, 00:15:27.268 "data_size": 63488 00:15:27.268 } 00:15:27.268 ] 00:15:27.268 }' 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.268 [2024-11-26 21:22:45.287632] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.268 [2024-11-26 21:22:45.349195] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:27.268 [2024-11-26 21:22:45.349243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.268 [2024-11-26 21:22:45.349260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.268 [2024-11-26 21:22:45.349267] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.268 21:22:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.268 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.528 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.528 "name": "raid_bdev1", 00:15:27.528 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:27.528 "strip_size_kb": 64, 00:15:27.528 "state": "online", 00:15:27.528 "raid_level": "raid5f", 00:15:27.528 "superblock": true, 00:15:27.528 "num_base_bdevs": 3, 00:15:27.528 "num_base_bdevs_discovered": 2, 00:15:27.528 "num_base_bdevs_operational": 2, 00:15:27.528 "base_bdevs_list": [ 00:15:27.528 { 00:15:27.528 "name": null, 00:15:27.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.528 "is_configured": false, 00:15:27.528 "data_offset": 0, 00:15:27.528 "data_size": 63488 00:15:27.528 }, 00:15:27.528 { 00:15:27.528 "name": "BaseBdev2", 00:15:27.528 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:27.528 "is_configured": true, 00:15:27.528 "data_offset": 2048, 00:15:27.528 "data_size": 63488 00:15:27.528 }, 00:15:27.528 { 00:15:27.528 "name": "BaseBdev3", 00:15:27.528 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:27.528 "is_configured": true, 00:15:27.528 "data_offset": 2048, 00:15:27.528 "data_size": 63488 00:15:27.528 } 00:15:27.528 ] 00:15:27.528 }' 00:15:27.528 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.528 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.788 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.788 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.788 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.788 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.788 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.788 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.788 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.788 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.788 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.788 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.788 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.788 "name": "raid_bdev1", 00:15:27.788 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:27.788 "strip_size_kb": 64, 00:15:27.788 "state": "online", 00:15:27.788 "raid_level": "raid5f", 00:15:27.788 "superblock": true, 00:15:27.788 "num_base_bdevs": 3, 00:15:27.788 "num_base_bdevs_discovered": 2, 00:15:27.788 "num_base_bdevs_operational": 2, 00:15:27.788 "base_bdevs_list": [ 00:15:27.788 { 00:15:27.788 "name": null, 00:15:27.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.788 "is_configured": false, 00:15:27.788 "data_offset": 0, 00:15:27.788 "data_size": 63488 00:15:27.788 }, 00:15:27.788 { 00:15:27.788 "name": "BaseBdev2", 00:15:27.788 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:27.788 "is_configured": true, 00:15:27.788 "data_offset": 2048, 00:15:27.788 "data_size": 63488 00:15:27.788 }, 00:15:27.788 { 00:15:27.788 "name": "BaseBdev3", 00:15:27.788 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:27.788 
"is_configured": true, 00:15:27.788 "data_offset": 2048, 00:15:27.789 "data_size": 63488 00:15:27.789 } 00:15:27.789 ] 00:15:27.789 }' 00:15:27.789 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.789 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.789 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.789 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.789 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:27.789 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.789 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.789 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.789 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:27.789 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.789 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.049 [2024-11-26 21:22:45.944477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:28.049 [2024-11-26 21:22:45.944528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.049 [2024-11-26 21:22:45.944554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:28.049 [2024-11-26 21:22:45.944564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.049 [2024-11-26 21:22:45.945083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.049 
[2024-11-26 21:22:45.945102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:28.049 [2024-11-26 21:22:45.945186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:28.049 [2024-11-26 21:22:45.945207] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:28.049 [2024-11-26 21:22:45.945234] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:28.049 [2024-11-26 21:22:45.945246] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:28.049 BaseBdev1 00:15:28.049 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.049 21:22:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.988 21:22:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.988 21:22:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.988 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.988 "name": "raid_bdev1", 00:15:28.988 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:28.988 "strip_size_kb": 64, 00:15:28.988 "state": "online", 00:15:28.988 "raid_level": "raid5f", 00:15:28.988 "superblock": true, 00:15:28.988 "num_base_bdevs": 3, 00:15:28.988 "num_base_bdevs_discovered": 2, 00:15:28.988 "num_base_bdevs_operational": 2, 00:15:28.988 "base_bdevs_list": [ 00:15:28.988 { 00:15:28.988 "name": null, 00:15:28.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.988 "is_configured": false, 00:15:28.988 "data_offset": 0, 00:15:28.988 "data_size": 63488 00:15:28.988 }, 00:15:28.988 { 00:15:28.988 "name": "BaseBdev2", 00:15:28.988 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:28.988 "is_configured": true, 00:15:28.988 "data_offset": 2048, 00:15:28.988 "data_size": 63488 00:15:28.988 }, 00:15:28.988 { 00:15:28.988 "name": "BaseBdev3", 00:15:28.988 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:28.988 "is_configured": true, 00:15:28.988 "data_offset": 2048, 00:15:28.988 "data_size": 63488 00:15:28.988 } 00:15:28.988 ] 00:15:28.988 }' 00:15:28.988 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.989 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.248 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.248 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.248 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.248 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.248 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.248 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.248 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.248 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.248 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.248 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.508 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.508 "name": "raid_bdev1", 00:15:29.508 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:29.508 "strip_size_kb": 64, 00:15:29.508 "state": "online", 00:15:29.508 "raid_level": "raid5f", 00:15:29.508 "superblock": true, 00:15:29.508 "num_base_bdevs": 3, 00:15:29.508 "num_base_bdevs_discovered": 2, 00:15:29.508 "num_base_bdevs_operational": 2, 00:15:29.508 "base_bdevs_list": [ 00:15:29.508 { 00:15:29.508 "name": null, 00:15:29.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.508 "is_configured": false, 00:15:29.508 "data_offset": 0, 00:15:29.508 "data_size": 63488 00:15:29.508 }, 00:15:29.508 { 00:15:29.508 "name": "BaseBdev2", 00:15:29.508 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 
00:15:29.508 "is_configured": true, 00:15:29.508 "data_offset": 2048, 00:15:29.508 "data_size": 63488 00:15:29.508 }, 00:15:29.508 { 00:15:29.508 "name": "BaseBdev3", 00:15:29.508 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:29.508 "is_configured": true, 00:15:29.508 "data_offset": 2048, 00:15:29.508 "data_size": 63488 00:15:29.508 } 00:15:29.508 ] 00:15:29.508 }' 00:15:29.508 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.508 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.508 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.508 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.508 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.508 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:29.508 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.508 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:29.509 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.509 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:29.509 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.509 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.509 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.509 21:22:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.509 [2024-11-26 21:22:47.538017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.509 [2024-11-26 21:22:47.538199] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:29.509 [2024-11-26 21:22:47.538260] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:29.509 request: 00:15:29.509 { 00:15:29.509 "base_bdev": "BaseBdev1", 00:15:29.509 "raid_bdev": "raid_bdev1", 00:15:29.509 "method": "bdev_raid_add_base_bdev", 00:15:29.509 "req_id": 1 00:15:29.509 } 00:15:29.509 Got JSON-RPC error response 00:15:29.509 response: 00:15:29.509 { 00:15:29.509 "code": -22, 00:15:29.509 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:29.509 } 00:15:29.509 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:29.509 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:29.509 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:29.509 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:29.509 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:29.509 21:22:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.449 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.709 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.709 "name": "raid_bdev1", 00:15:30.709 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:30.709 "strip_size_kb": 64, 00:15:30.709 "state": "online", 00:15:30.709 "raid_level": "raid5f", 00:15:30.709 "superblock": true, 00:15:30.709 "num_base_bdevs": 3, 00:15:30.709 "num_base_bdevs_discovered": 2, 00:15:30.709 "num_base_bdevs_operational": 2, 00:15:30.709 "base_bdevs_list": [ 00:15:30.709 { 00:15:30.709 "name": null, 00:15:30.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.709 "is_configured": false, 00:15:30.709 "data_offset": 0, 00:15:30.709 "data_size": 63488 00:15:30.709 }, 00:15:30.709 { 00:15:30.709 
"name": "BaseBdev2", 00:15:30.709 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:30.709 "is_configured": true, 00:15:30.709 "data_offset": 2048, 00:15:30.709 "data_size": 63488 00:15:30.709 }, 00:15:30.709 { 00:15:30.709 "name": "BaseBdev3", 00:15:30.709 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:30.709 "is_configured": true, 00:15:30.709 "data_offset": 2048, 00:15:30.709 "data_size": 63488 00:15:30.709 } 00:15:30.709 ] 00:15:30.709 }' 00:15:30.709 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.709 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.969 "name": "raid_bdev1", 00:15:30.969 "uuid": "54f14dfc-baf4-4ea6-b1ad-be720d08f866", 00:15:30.969 
"strip_size_kb": 64, 00:15:30.969 "state": "online", 00:15:30.969 "raid_level": "raid5f", 00:15:30.969 "superblock": true, 00:15:30.969 "num_base_bdevs": 3, 00:15:30.969 "num_base_bdevs_discovered": 2, 00:15:30.969 "num_base_bdevs_operational": 2, 00:15:30.969 "base_bdevs_list": [ 00:15:30.969 { 00:15:30.969 "name": null, 00:15:30.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.969 "is_configured": false, 00:15:30.969 "data_offset": 0, 00:15:30.969 "data_size": 63488 00:15:30.969 }, 00:15:30.969 { 00:15:30.969 "name": "BaseBdev2", 00:15:30.969 "uuid": "807670c5-e17d-5eff-a537-c9067f2fb808", 00:15:30.969 "is_configured": true, 00:15:30.969 "data_offset": 2048, 00:15:30.969 "data_size": 63488 00:15:30.969 }, 00:15:30.969 { 00:15:30.969 "name": "BaseBdev3", 00:15:30.969 "uuid": "c91750e7-f9d7-56b3-b386-0c95cb636897", 00:15:30.969 "is_configured": true, 00:15:30.969 "data_offset": 2048, 00:15:30.969 "data_size": 63488 00:15:30.969 } 00:15:30.969 ] 00:15:30.969 }' 00:15:30.969 21:22:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.969 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.969 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.970 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.970 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81809 00:15:30.970 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81809 ']' 00:15:30.970 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81809 00:15:30.970 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:30.970 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.970 21:22:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81809 00:15:30.970 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.970 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.970 killing process with pid 81809 00:15:30.970 Received shutdown signal, test time was about 60.000000 seconds 00:15:30.970 00:15:30.970 Latency(us) 00:15:30.970 [2024-11-26T21:22:49.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.970 [2024-11-26T21:22:49.126Z] =================================================================================================================== 00:15:30.970 [2024-11-26T21:22:49.126Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:30.970 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81809' 00:15:30.970 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81809 00:15:30.970 [2024-11-26 21:22:49.117870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.970 [2024-11-26 21:22:49.117969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.970 [2024-11-26 21:22:49.118021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.970 [2024-11-26 21:22:49.118033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:30.970 21:22:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81809 00:15:31.542 [2024-11-26 21:22:49.523997] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.923 21:22:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:32.923 00:15:32.923 real 0m23.150s 00:15:32.923 user 0m29.478s 
00:15:32.923 sys 0m2.740s 00:15:32.923 21:22:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.923 ************************************ 00:15:32.923 END TEST raid5f_rebuild_test_sb 00:15:32.923 ************************************ 00:15:32.923 21:22:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.923 21:22:50 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:32.923 21:22:50 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:32.923 21:22:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:32.923 21:22:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.923 21:22:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.923 ************************************ 00:15:32.923 START TEST raid5f_state_function_test 00:15:32.923 ************************************ 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82564 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82564' 00:15:32.923 Process raid pid: 82564 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82564 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82564 ']' 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.923 21:22:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.923 [2024-11-26 21:22:50.850244] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:15:32.923 [2024-11-26 21:22:50.850465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.923 [2024-11-26 21:22:51.025662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.183 [2024-11-26 21:22:51.157400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.443 [2024-11-26 21:22:51.385986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.443 [2024-11-26 21:22:51.386128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.703 [2024-11-26 21:22:51.656991] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.703 [2024-11-26 21:22:51.657052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.703 [2024-11-26 21:22:51.657062] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.703 [2024-11-26 21:22:51.657071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.703 [2024-11-26 21:22:51.657077] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:33.703 [2024-11-26 21:22:51.657085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.703 [2024-11-26 21:22:51.657090] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:33.703 [2024-11-26 21:22:51.657099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.703 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.704 "name": "Existed_Raid", 00:15:33.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.704 "strip_size_kb": 64, 00:15:33.704 "state": "configuring", 00:15:33.704 "raid_level": "raid5f", 00:15:33.704 "superblock": false, 00:15:33.704 "num_base_bdevs": 4, 00:15:33.704 "num_base_bdevs_discovered": 0, 00:15:33.704 "num_base_bdevs_operational": 4, 00:15:33.704 "base_bdevs_list": [ 00:15:33.704 { 00:15:33.704 "name": "BaseBdev1", 00:15:33.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.704 "is_configured": false, 00:15:33.704 "data_offset": 0, 00:15:33.704 "data_size": 0 00:15:33.704 }, 00:15:33.704 { 00:15:33.704 "name": "BaseBdev2", 00:15:33.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.704 "is_configured": false, 00:15:33.704 "data_offset": 0, 00:15:33.704 "data_size": 0 00:15:33.704 }, 00:15:33.704 { 00:15:33.704 "name": "BaseBdev3", 00:15:33.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.704 "is_configured": false, 00:15:33.704 "data_offset": 0, 00:15:33.704 "data_size": 0 00:15:33.704 }, 00:15:33.704 { 00:15:33.704 "name": "BaseBdev4", 00:15:33.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.704 "is_configured": false, 00:15:33.704 "data_offset": 0, 00:15:33.704 "data_size": 0 00:15:33.704 } 00:15:33.704 ] 00:15:33.704 }' 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.704 21:22:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.964 21:22:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.964 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.964 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.964 [2024-11-26 21:22:52.072200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.964 [2024-11-26 21:22:52.072307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:33.964 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.964 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:33.964 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.964 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.964 [2024-11-26 21:22:52.084179] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.964 [2024-11-26 21:22:52.084257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.964 [2024-11-26 21:22:52.084282] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.964 [2024-11-26 21:22:52.084304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.964 [2024-11-26 21:22:52.084320] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.964 [2024-11-26 21:22:52.084340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.964 [2024-11-26 21:22:52.084356] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:33.964 [2024-11-26 21:22:52.084376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:33.964 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.964 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:33.964 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.964 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.224 [2024-11-26 21:22:52.136741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.224 BaseBdev1 00:15:34.224 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.224 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:34.224 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:34.224 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.224 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:34.224 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.224 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.225 
21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.225 [ 00:15:34.225 { 00:15:34.225 "name": "BaseBdev1", 00:15:34.225 "aliases": [ 00:15:34.225 "e64eb150-8e3a-48d4-bf84-574efc8bbefd" 00:15:34.225 ], 00:15:34.225 "product_name": "Malloc disk", 00:15:34.225 "block_size": 512, 00:15:34.225 "num_blocks": 65536, 00:15:34.225 "uuid": "e64eb150-8e3a-48d4-bf84-574efc8bbefd", 00:15:34.225 "assigned_rate_limits": { 00:15:34.225 "rw_ios_per_sec": 0, 00:15:34.225 "rw_mbytes_per_sec": 0, 00:15:34.225 "r_mbytes_per_sec": 0, 00:15:34.225 "w_mbytes_per_sec": 0 00:15:34.225 }, 00:15:34.225 "claimed": true, 00:15:34.225 "claim_type": "exclusive_write", 00:15:34.225 "zoned": false, 00:15:34.225 "supported_io_types": { 00:15:34.225 "read": true, 00:15:34.225 "write": true, 00:15:34.225 "unmap": true, 00:15:34.225 "flush": true, 00:15:34.225 "reset": true, 00:15:34.225 "nvme_admin": false, 00:15:34.225 "nvme_io": false, 00:15:34.225 "nvme_io_md": false, 00:15:34.225 "write_zeroes": true, 00:15:34.225 "zcopy": true, 00:15:34.225 "get_zone_info": false, 00:15:34.225 "zone_management": false, 00:15:34.225 "zone_append": false, 00:15:34.225 "compare": false, 00:15:34.225 "compare_and_write": false, 00:15:34.225 "abort": true, 00:15:34.225 "seek_hole": false, 00:15:34.225 "seek_data": false, 00:15:34.225 "copy": true, 00:15:34.225 "nvme_iov_md": false 00:15:34.225 }, 00:15:34.225 "memory_domains": [ 00:15:34.225 { 00:15:34.225 "dma_device_id": "system", 00:15:34.225 "dma_device_type": 1 00:15:34.225 }, 00:15:34.225 { 00:15:34.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.225 "dma_device_type": 2 00:15:34.225 } 00:15:34.225 ], 00:15:34.225 "driver_specific": {} 00:15:34.225 } 
00:15:34.225 ] 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.225 "name": "Existed_Raid", 00:15:34.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.225 "strip_size_kb": 64, 00:15:34.225 "state": "configuring", 00:15:34.225 "raid_level": "raid5f", 00:15:34.225 "superblock": false, 00:15:34.225 "num_base_bdevs": 4, 00:15:34.225 "num_base_bdevs_discovered": 1, 00:15:34.225 "num_base_bdevs_operational": 4, 00:15:34.225 "base_bdevs_list": [ 00:15:34.225 { 00:15:34.225 "name": "BaseBdev1", 00:15:34.225 "uuid": "e64eb150-8e3a-48d4-bf84-574efc8bbefd", 00:15:34.225 "is_configured": true, 00:15:34.225 "data_offset": 0, 00:15:34.225 "data_size": 65536 00:15:34.225 }, 00:15:34.225 { 00:15:34.225 "name": "BaseBdev2", 00:15:34.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.225 "is_configured": false, 00:15:34.225 "data_offset": 0, 00:15:34.225 "data_size": 0 00:15:34.225 }, 00:15:34.225 { 00:15:34.225 "name": "BaseBdev3", 00:15:34.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.225 "is_configured": false, 00:15:34.225 "data_offset": 0, 00:15:34.225 "data_size": 0 00:15:34.225 }, 00:15:34.225 { 00:15:34.225 "name": "BaseBdev4", 00:15:34.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.225 "is_configured": false, 00:15:34.225 "data_offset": 0, 00:15:34.225 "data_size": 0 00:15:34.225 } 00:15:34.225 ] 00:15:34.225 }' 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.225 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.485 
[2024-11-26 21:22:52.588061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.485 [2024-11-26 21:22:52.588151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.485 [2024-11-26 21:22:52.600124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.485 [2024-11-26 21:22:52.602097] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.485 [2024-11-26 21:22:52.602135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.485 [2024-11-26 21:22:52.602144] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.485 [2024-11-26 21:22:52.602154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.485 [2024-11-26 21:22:52.602160] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:34.485 [2024-11-26 21:22:52.602168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.485 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.746 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.746 "name": "Existed_Raid", 00:15:34.746 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:34.746 "strip_size_kb": 64, 00:15:34.746 "state": "configuring", 00:15:34.746 "raid_level": "raid5f", 00:15:34.746 "superblock": false, 00:15:34.746 "num_base_bdevs": 4, 00:15:34.746 "num_base_bdevs_discovered": 1, 00:15:34.746 "num_base_bdevs_operational": 4, 00:15:34.746 "base_bdevs_list": [ 00:15:34.746 { 00:15:34.746 "name": "BaseBdev1", 00:15:34.746 "uuid": "e64eb150-8e3a-48d4-bf84-574efc8bbefd", 00:15:34.746 "is_configured": true, 00:15:34.746 "data_offset": 0, 00:15:34.746 "data_size": 65536 00:15:34.746 }, 00:15:34.746 { 00:15:34.746 "name": "BaseBdev2", 00:15:34.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.746 "is_configured": false, 00:15:34.746 "data_offset": 0, 00:15:34.746 "data_size": 0 00:15:34.746 }, 00:15:34.746 { 00:15:34.746 "name": "BaseBdev3", 00:15:34.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.746 "is_configured": false, 00:15:34.746 "data_offset": 0, 00:15:34.746 "data_size": 0 00:15:34.746 }, 00:15:34.746 { 00:15:34.746 "name": "BaseBdev4", 00:15:34.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.746 "is_configured": false, 00:15:34.746 "data_offset": 0, 00:15:34.746 "data_size": 0 00:15:34.746 } 00:15:34.746 ] 00:15:34.746 }' 00:15:34.746 21:22:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.746 21:22:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.006 [2024-11-26 21:22:53.086892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.006 BaseBdev2 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.006 [ 00:15:35.006 { 00:15:35.006 "name": "BaseBdev2", 00:15:35.006 "aliases": [ 00:15:35.006 "cca82751-bcf8-4be5-81ef-cca82cc068f4" 00:15:35.006 ], 00:15:35.006 "product_name": "Malloc disk", 00:15:35.006 "block_size": 512, 00:15:35.006 "num_blocks": 65536, 00:15:35.006 "uuid": "cca82751-bcf8-4be5-81ef-cca82cc068f4", 00:15:35.006 "assigned_rate_limits": { 00:15:35.006 "rw_ios_per_sec": 0, 00:15:35.006 "rw_mbytes_per_sec": 0, 00:15:35.006 
"r_mbytes_per_sec": 0, 00:15:35.006 "w_mbytes_per_sec": 0 00:15:35.006 }, 00:15:35.006 "claimed": true, 00:15:35.006 "claim_type": "exclusive_write", 00:15:35.006 "zoned": false, 00:15:35.006 "supported_io_types": { 00:15:35.006 "read": true, 00:15:35.006 "write": true, 00:15:35.006 "unmap": true, 00:15:35.006 "flush": true, 00:15:35.006 "reset": true, 00:15:35.006 "nvme_admin": false, 00:15:35.006 "nvme_io": false, 00:15:35.006 "nvme_io_md": false, 00:15:35.006 "write_zeroes": true, 00:15:35.006 "zcopy": true, 00:15:35.006 "get_zone_info": false, 00:15:35.006 "zone_management": false, 00:15:35.006 "zone_append": false, 00:15:35.006 "compare": false, 00:15:35.006 "compare_and_write": false, 00:15:35.006 "abort": true, 00:15:35.006 "seek_hole": false, 00:15:35.006 "seek_data": false, 00:15:35.006 "copy": true, 00:15:35.006 "nvme_iov_md": false 00:15:35.006 }, 00:15:35.006 "memory_domains": [ 00:15:35.006 { 00:15:35.006 "dma_device_id": "system", 00:15:35.006 "dma_device_type": 1 00:15:35.006 }, 00:15:35.006 { 00:15:35.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.006 "dma_device_type": 2 00:15:35.006 } 00:15:35.006 ], 00:15:35.006 "driver_specific": {} 00:15:35.006 } 00:15:35.006 ] 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.006 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.266 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.266 "name": "Existed_Raid", 00:15:35.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.266 "strip_size_kb": 64, 00:15:35.266 "state": "configuring", 00:15:35.266 "raid_level": "raid5f", 00:15:35.266 "superblock": false, 00:15:35.266 "num_base_bdevs": 4, 00:15:35.266 "num_base_bdevs_discovered": 2, 00:15:35.266 "num_base_bdevs_operational": 4, 00:15:35.266 "base_bdevs_list": [ 00:15:35.266 { 00:15:35.266 "name": "BaseBdev1", 00:15:35.266 "uuid": 
"e64eb150-8e3a-48d4-bf84-574efc8bbefd", 00:15:35.266 "is_configured": true, 00:15:35.266 "data_offset": 0, 00:15:35.266 "data_size": 65536 00:15:35.266 }, 00:15:35.266 { 00:15:35.266 "name": "BaseBdev2", 00:15:35.266 "uuid": "cca82751-bcf8-4be5-81ef-cca82cc068f4", 00:15:35.266 "is_configured": true, 00:15:35.266 "data_offset": 0, 00:15:35.266 "data_size": 65536 00:15:35.266 }, 00:15:35.266 { 00:15:35.266 "name": "BaseBdev3", 00:15:35.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.266 "is_configured": false, 00:15:35.266 "data_offset": 0, 00:15:35.266 "data_size": 0 00:15:35.266 }, 00:15:35.266 { 00:15:35.266 "name": "BaseBdev4", 00:15:35.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.266 "is_configured": false, 00:15:35.266 "data_offset": 0, 00:15:35.266 "data_size": 0 00:15:35.266 } 00:15:35.266 ] 00:15:35.266 }' 00:15:35.266 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.266 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.526 [2024-11-26 21:22:53.609792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.526 BaseBdev3 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.526 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.526 [ 00:15:35.526 { 00:15:35.526 "name": "BaseBdev3", 00:15:35.526 "aliases": [ 00:15:35.526 "2fbe461a-3eb4-4ec0-93c7-eec3df7ca21b" 00:15:35.526 ], 00:15:35.526 "product_name": "Malloc disk", 00:15:35.526 "block_size": 512, 00:15:35.526 "num_blocks": 65536, 00:15:35.526 "uuid": "2fbe461a-3eb4-4ec0-93c7-eec3df7ca21b", 00:15:35.526 "assigned_rate_limits": { 00:15:35.526 "rw_ios_per_sec": 0, 00:15:35.526 "rw_mbytes_per_sec": 0, 00:15:35.526 "r_mbytes_per_sec": 0, 00:15:35.526 "w_mbytes_per_sec": 0 00:15:35.526 }, 00:15:35.526 "claimed": true, 00:15:35.526 "claim_type": "exclusive_write", 00:15:35.526 "zoned": false, 00:15:35.526 "supported_io_types": { 00:15:35.526 "read": true, 00:15:35.526 "write": true, 00:15:35.526 "unmap": true, 00:15:35.526 "flush": true, 00:15:35.526 "reset": true, 00:15:35.526 "nvme_admin": false, 
00:15:35.526 "nvme_io": false, 00:15:35.526 "nvme_io_md": false, 00:15:35.526 "write_zeroes": true, 00:15:35.526 "zcopy": true, 00:15:35.526 "get_zone_info": false, 00:15:35.526 "zone_management": false, 00:15:35.526 "zone_append": false, 00:15:35.526 "compare": false, 00:15:35.526 "compare_and_write": false, 00:15:35.526 "abort": true, 00:15:35.526 "seek_hole": false, 00:15:35.526 "seek_data": false, 00:15:35.526 "copy": true, 00:15:35.526 "nvme_iov_md": false 00:15:35.526 }, 00:15:35.526 "memory_domains": [ 00:15:35.526 { 00:15:35.526 "dma_device_id": "system", 00:15:35.527 "dma_device_type": 1 00:15:35.527 }, 00:15:35.527 { 00:15:35.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.527 "dma_device_type": 2 00:15:35.527 } 00:15:35.527 ], 00:15:35.527 "driver_specific": {} 00:15:35.527 } 00:15:35.527 ] 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.527 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.786 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.786 "name": "Existed_Raid", 00:15:35.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.786 "strip_size_kb": 64, 00:15:35.786 "state": "configuring", 00:15:35.786 "raid_level": "raid5f", 00:15:35.786 "superblock": false, 00:15:35.786 "num_base_bdevs": 4, 00:15:35.786 "num_base_bdevs_discovered": 3, 00:15:35.786 "num_base_bdevs_operational": 4, 00:15:35.786 "base_bdevs_list": [ 00:15:35.786 { 00:15:35.786 "name": "BaseBdev1", 00:15:35.786 "uuid": "e64eb150-8e3a-48d4-bf84-574efc8bbefd", 00:15:35.786 "is_configured": true, 00:15:35.786 "data_offset": 0, 00:15:35.786 "data_size": 65536 00:15:35.786 }, 00:15:35.786 { 00:15:35.786 "name": "BaseBdev2", 00:15:35.786 "uuid": "cca82751-bcf8-4be5-81ef-cca82cc068f4", 00:15:35.786 "is_configured": true, 00:15:35.786 "data_offset": 0, 00:15:35.786 "data_size": 65536 00:15:35.786 }, 00:15:35.786 { 
00:15:35.786 "name": "BaseBdev3", 00:15:35.786 "uuid": "2fbe461a-3eb4-4ec0-93c7-eec3df7ca21b", 00:15:35.786 "is_configured": true, 00:15:35.786 "data_offset": 0, 00:15:35.786 "data_size": 65536 00:15:35.786 }, 00:15:35.786 { 00:15:35.786 "name": "BaseBdev4", 00:15:35.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.786 "is_configured": false, 00:15:35.786 "data_offset": 0, 00:15:35.786 "data_size": 0 00:15:35.786 } 00:15:35.786 ] 00:15:35.786 }' 00:15:35.787 21:22:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.787 21:22:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.047 [2024-11-26 21:22:54.089238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:36.047 [2024-11-26 21:22:54.089418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:36.047 [2024-11-26 21:22:54.089448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:36.047 [2024-11-26 21:22:54.089769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:36.047 [2024-11-26 21:22:54.096789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:36.047 [2024-11-26 21:22:54.096852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:36.047 [2024-11-26 21:22:54.097162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.047 BaseBdev4 00:15:36.047 21:22:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.047 [ 00:15:36.047 { 00:15:36.047 "name": "BaseBdev4", 00:15:36.047 "aliases": [ 00:15:36.047 "38a2956d-6d75-47eb-bc4c-2a8dab539b84" 00:15:36.047 ], 00:15:36.047 "product_name": "Malloc disk", 00:15:36.047 "block_size": 512, 00:15:36.047 "num_blocks": 65536, 00:15:36.047 "uuid": "38a2956d-6d75-47eb-bc4c-2a8dab539b84", 00:15:36.047 "assigned_rate_limits": { 00:15:36.047 "rw_ios_per_sec": 0, 00:15:36.047 
"rw_mbytes_per_sec": 0, 00:15:36.047 "r_mbytes_per_sec": 0, 00:15:36.047 "w_mbytes_per_sec": 0 00:15:36.047 }, 00:15:36.047 "claimed": true, 00:15:36.047 "claim_type": "exclusive_write", 00:15:36.047 "zoned": false, 00:15:36.047 "supported_io_types": { 00:15:36.047 "read": true, 00:15:36.047 "write": true, 00:15:36.047 "unmap": true, 00:15:36.047 "flush": true, 00:15:36.047 "reset": true, 00:15:36.047 "nvme_admin": false, 00:15:36.047 "nvme_io": false, 00:15:36.047 "nvme_io_md": false, 00:15:36.047 "write_zeroes": true, 00:15:36.047 "zcopy": true, 00:15:36.047 "get_zone_info": false, 00:15:36.047 "zone_management": false, 00:15:36.047 "zone_append": false, 00:15:36.047 "compare": false, 00:15:36.047 "compare_and_write": false, 00:15:36.047 "abort": true, 00:15:36.047 "seek_hole": false, 00:15:36.047 "seek_data": false, 00:15:36.047 "copy": true, 00:15:36.047 "nvme_iov_md": false 00:15:36.047 }, 00:15:36.047 "memory_domains": [ 00:15:36.047 { 00:15:36.047 "dma_device_id": "system", 00:15:36.047 "dma_device_type": 1 00:15:36.047 }, 00:15:36.047 { 00:15:36.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.047 "dma_device_type": 2 00:15:36.047 } 00:15:36.047 ], 00:15:36.047 "driver_specific": {} 00:15:36.047 } 00:15:36.047 ] 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.047 21:22:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.047 "name": "Existed_Raid", 00:15:36.047 "uuid": "60357bed-6441-4c69-b256-b1ce9a6e9e12", 00:15:36.047 "strip_size_kb": 64, 00:15:36.047 "state": "online", 00:15:36.047 "raid_level": "raid5f", 00:15:36.047 "superblock": false, 00:15:36.047 "num_base_bdevs": 4, 00:15:36.047 "num_base_bdevs_discovered": 4, 00:15:36.047 "num_base_bdevs_operational": 4, 00:15:36.047 "base_bdevs_list": [ 00:15:36.047 { 00:15:36.047 "name": 
"BaseBdev1", 00:15:36.047 "uuid": "e64eb150-8e3a-48d4-bf84-574efc8bbefd", 00:15:36.047 "is_configured": true, 00:15:36.047 "data_offset": 0, 00:15:36.047 "data_size": 65536 00:15:36.047 }, 00:15:36.047 { 00:15:36.047 "name": "BaseBdev2", 00:15:36.047 "uuid": "cca82751-bcf8-4be5-81ef-cca82cc068f4", 00:15:36.047 "is_configured": true, 00:15:36.047 "data_offset": 0, 00:15:36.047 "data_size": 65536 00:15:36.047 }, 00:15:36.047 { 00:15:36.047 "name": "BaseBdev3", 00:15:36.047 "uuid": "2fbe461a-3eb4-4ec0-93c7-eec3df7ca21b", 00:15:36.047 "is_configured": true, 00:15:36.047 "data_offset": 0, 00:15:36.047 "data_size": 65536 00:15:36.047 }, 00:15:36.047 { 00:15:36.047 "name": "BaseBdev4", 00:15:36.047 "uuid": "38a2956d-6d75-47eb-bc4c-2a8dab539b84", 00:15:36.047 "is_configured": true, 00:15:36.047 "data_offset": 0, 00:15:36.047 "data_size": 65536 00:15:36.047 } 00:15:36.047 ] 00:15:36.047 }' 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.047 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.616 [2024-11-26 21:22:54.625397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.616 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.616 "name": "Existed_Raid", 00:15:36.616 "aliases": [ 00:15:36.616 "60357bed-6441-4c69-b256-b1ce9a6e9e12" 00:15:36.616 ], 00:15:36.616 "product_name": "Raid Volume", 00:15:36.616 "block_size": 512, 00:15:36.616 "num_blocks": 196608, 00:15:36.616 "uuid": "60357bed-6441-4c69-b256-b1ce9a6e9e12", 00:15:36.616 "assigned_rate_limits": { 00:15:36.616 "rw_ios_per_sec": 0, 00:15:36.616 "rw_mbytes_per_sec": 0, 00:15:36.616 "r_mbytes_per_sec": 0, 00:15:36.616 "w_mbytes_per_sec": 0 00:15:36.616 }, 00:15:36.616 "claimed": false, 00:15:36.616 "zoned": false, 00:15:36.616 "supported_io_types": { 00:15:36.616 "read": true, 00:15:36.616 "write": true, 00:15:36.616 "unmap": false, 00:15:36.616 "flush": false, 00:15:36.616 "reset": true, 00:15:36.616 "nvme_admin": false, 00:15:36.616 "nvme_io": false, 00:15:36.616 "nvme_io_md": false, 00:15:36.616 "write_zeroes": true, 00:15:36.616 "zcopy": false, 00:15:36.616 "get_zone_info": false, 00:15:36.616 "zone_management": false, 00:15:36.616 "zone_append": false, 00:15:36.616 "compare": false, 00:15:36.616 "compare_and_write": false, 00:15:36.616 "abort": false, 00:15:36.616 "seek_hole": false, 00:15:36.616 "seek_data": false, 00:15:36.616 "copy": false, 00:15:36.616 "nvme_iov_md": false 00:15:36.616 }, 00:15:36.616 "driver_specific": { 00:15:36.616 "raid": { 00:15:36.616 "uuid": "60357bed-6441-4c69-b256-b1ce9a6e9e12", 00:15:36.616 "strip_size_kb": 64, 
00:15:36.616 "state": "online", 00:15:36.616 "raid_level": "raid5f", 00:15:36.616 "superblock": false, 00:15:36.616 "num_base_bdevs": 4, 00:15:36.616 "num_base_bdevs_discovered": 4, 00:15:36.616 "num_base_bdevs_operational": 4, 00:15:36.616 "base_bdevs_list": [ 00:15:36.616 { 00:15:36.616 "name": "BaseBdev1", 00:15:36.616 "uuid": "e64eb150-8e3a-48d4-bf84-574efc8bbefd", 00:15:36.616 "is_configured": true, 00:15:36.616 "data_offset": 0, 00:15:36.616 "data_size": 65536 00:15:36.616 }, 00:15:36.616 { 00:15:36.616 "name": "BaseBdev2", 00:15:36.616 "uuid": "cca82751-bcf8-4be5-81ef-cca82cc068f4", 00:15:36.616 "is_configured": true, 00:15:36.616 "data_offset": 0, 00:15:36.616 "data_size": 65536 00:15:36.616 }, 00:15:36.616 { 00:15:36.616 "name": "BaseBdev3", 00:15:36.617 "uuid": "2fbe461a-3eb4-4ec0-93c7-eec3df7ca21b", 00:15:36.617 "is_configured": true, 00:15:36.617 "data_offset": 0, 00:15:36.617 "data_size": 65536 00:15:36.617 }, 00:15:36.617 { 00:15:36.617 "name": "BaseBdev4", 00:15:36.617 "uuid": "38a2956d-6d75-47eb-bc4c-2a8dab539b84", 00:15:36.617 "is_configured": true, 00:15:36.617 "data_offset": 0, 00:15:36.617 "data_size": 65536 00:15:36.617 } 00:15:36.617 ] 00:15:36.617 } 00:15:36.617 } 00:15:36.617 }' 00:15:36.617 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.617 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:36.617 BaseBdev2 00:15:36.617 BaseBdev3 00:15:36.617 BaseBdev4' 00:15:36.617 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.617 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:36.617 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.617 21:22:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:36.617 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.617 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.617 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.617 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.877 21:22:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.877 [2024-11-26 21:22:54.912824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.877 21:22:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.877 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.136 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.136 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.136 "name": "Existed_Raid", 00:15:37.136 "uuid": "60357bed-6441-4c69-b256-b1ce9a6e9e12", 00:15:37.136 "strip_size_kb": 64, 00:15:37.136 "state": "online", 00:15:37.136 "raid_level": "raid5f", 00:15:37.136 "superblock": false, 00:15:37.136 "num_base_bdevs": 4, 00:15:37.136 "num_base_bdevs_discovered": 3, 00:15:37.136 "num_base_bdevs_operational": 3, 00:15:37.136 "base_bdevs_list": [ 00:15:37.136 { 00:15:37.136 "name": null, 00:15:37.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.136 "is_configured": false, 00:15:37.136 "data_offset": 0, 00:15:37.136 "data_size": 65536 00:15:37.136 }, 00:15:37.136 { 00:15:37.137 "name": "BaseBdev2", 00:15:37.137 "uuid": "cca82751-bcf8-4be5-81ef-cca82cc068f4", 00:15:37.137 "is_configured": true, 00:15:37.137 "data_offset": 0, 00:15:37.137 "data_size": 65536 00:15:37.137 }, 00:15:37.137 { 00:15:37.137 "name": "BaseBdev3", 00:15:37.137 "uuid": "2fbe461a-3eb4-4ec0-93c7-eec3df7ca21b", 00:15:37.137 "is_configured": true, 00:15:37.137 "data_offset": 0, 00:15:37.137 "data_size": 65536 00:15:37.137 }, 00:15:37.137 { 00:15:37.137 "name": "BaseBdev4", 00:15:37.137 "uuid": "38a2956d-6d75-47eb-bc4c-2a8dab539b84", 00:15:37.137 "is_configured": true, 00:15:37.137 "data_offset": 0, 00:15:37.137 "data_size": 65536 00:15:37.137 } 00:15:37.137 ] 00:15:37.137 }' 00:15:37.137 
21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.137 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.396 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.396 [2024-11-26 21:22:55.529696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.396 [2024-11-26 21:22:55.529877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.655 [2024-11-26 21:22:55.628673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.655 [2024-11-26 21:22:55.688568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.655 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.656 21:22:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.656 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.656 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.656 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.915 [2024-11-26 21:22:55.847490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:37.915 [2024-11-26 21:22:55.847622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.915 21:22:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.915 BaseBdev2 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.915 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.175 [ 00:15:38.175 { 00:15:38.175 "name": "BaseBdev2", 00:15:38.175 "aliases": [ 00:15:38.175 "7a463bd0-97d4-4bfd-a2eb-1383e92f671d" 00:15:38.175 ], 00:15:38.175 "product_name": "Malloc disk", 00:15:38.175 "block_size": 512, 00:15:38.175 "num_blocks": 65536, 00:15:38.175 "uuid": "7a463bd0-97d4-4bfd-a2eb-1383e92f671d", 00:15:38.175 "assigned_rate_limits": { 00:15:38.175 "rw_ios_per_sec": 0, 00:15:38.175 "rw_mbytes_per_sec": 0, 00:15:38.175 "r_mbytes_per_sec": 0, 00:15:38.175 "w_mbytes_per_sec": 0 00:15:38.175 }, 00:15:38.175 "claimed": false, 00:15:38.175 "zoned": false, 00:15:38.175 "supported_io_types": { 00:15:38.175 "read": true, 00:15:38.175 "write": true, 00:15:38.175 "unmap": true, 00:15:38.175 "flush": true, 00:15:38.175 "reset": true, 00:15:38.175 "nvme_admin": false, 00:15:38.175 "nvme_io": false, 00:15:38.175 "nvme_io_md": false, 00:15:38.175 "write_zeroes": true, 00:15:38.175 "zcopy": true, 00:15:38.175 "get_zone_info": false, 00:15:38.175 "zone_management": false, 00:15:38.175 "zone_append": false, 00:15:38.175 "compare": false, 00:15:38.175 "compare_and_write": false, 00:15:38.175 "abort": true, 00:15:38.175 "seek_hole": false, 00:15:38.175 "seek_data": false, 00:15:38.175 "copy": true, 00:15:38.175 "nvme_iov_md": false 00:15:38.175 }, 00:15:38.175 "memory_domains": [ 00:15:38.175 { 00:15:38.175 "dma_device_id": "system", 00:15:38.175 
"dma_device_type": 1 00:15:38.175 }, 00:15:38.175 { 00:15:38.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.175 "dma_device_type": 2 00:15:38.175 } 00:15:38.175 ], 00:15:38.175 "driver_specific": {} 00:15:38.175 } 00:15:38.175 ] 00:15:38.175 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.175 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.175 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.176 BaseBdev3 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.176 21:22:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.176 [ 00:15:38.176 { 00:15:38.176 "name": "BaseBdev3", 00:15:38.176 "aliases": [ 00:15:38.176 "0d8ba2c9-8748-4916-afde-52fbb07552be" 00:15:38.176 ], 00:15:38.176 "product_name": "Malloc disk", 00:15:38.176 "block_size": 512, 00:15:38.176 "num_blocks": 65536, 00:15:38.176 "uuid": "0d8ba2c9-8748-4916-afde-52fbb07552be", 00:15:38.176 "assigned_rate_limits": { 00:15:38.176 "rw_ios_per_sec": 0, 00:15:38.176 "rw_mbytes_per_sec": 0, 00:15:38.176 "r_mbytes_per_sec": 0, 00:15:38.176 "w_mbytes_per_sec": 0 00:15:38.176 }, 00:15:38.176 "claimed": false, 00:15:38.176 "zoned": false, 00:15:38.176 "supported_io_types": { 00:15:38.176 "read": true, 00:15:38.176 "write": true, 00:15:38.176 "unmap": true, 00:15:38.176 "flush": true, 00:15:38.176 "reset": true, 00:15:38.176 "nvme_admin": false, 00:15:38.176 "nvme_io": false, 00:15:38.176 "nvme_io_md": false, 00:15:38.176 "write_zeroes": true, 00:15:38.176 "zcopy": true, 00:15:38.176 "get_zone_info": false, 00:15:38.176 "zone_management": false, 00:15:38.176 "zone_append": false, 00:15:38.176 "compare": false, 00:15:38.176 "compare_and_write": false, 00:15:38.176 "abort": true, 00:15:38.176 "seek_hole": false, 00:15:38.176 "seek_data": false, 00:15:38.176 "copy": true, 00:15:38.176 "nvme_iov_md": false 00:15:38.176 }, 00:15:38.176 "memory_domains": [ 00:15:38.176 { 00:15:38.176 
"dma_device_id": "system", 00:15:38.176 "dma_device_type": 1 00:15:38.176 }, 00:15:38.176 { 00:15:38.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.176 "dma_device_type": 2 00:15:38.176 } 00:15:38.176 ], 00:15:38.176 "driver_specific": {} 00:15:38.176 } 00:15:38.176 ] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.176 BaseBdev4 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.176 [ 00:15:38.176 { 00:15:38.176 "name": "BaseBdev4", 00:15:38.176 "aliases": [ 00:15:38.176 "d42aa070-0805-4ce0-9f2b-526b2528387a" 00:15:38.176 ], 00:15:38.176 "product_name": "Malloc disk", 00:15:38.176 "block_size": 512, 00:15:38.176 "num_blocks": 65536, 00:15:38.176 "uuid": "d42aa070-0805-4ce0-9f2b-526b2528387a", 00:15:38.176 "assigned_rate_limits": { 00:15:38.176 "rw_ios_per_sec": 0, 00:15:38.176 "rw_mbytes_per_sec": 0, 00:15:38.176 "r_mbytes_per_sec": 0, 00:15:38.176 "w_mbytes_per_sec": 0 00:15:38.176 }, 00:15:38.176 "claimed": false, 00:15:38.176 "zoned": false, 00:15:38.176 "supported_io_types": { 00:15:38.176 "read": true, 00:15:38.176 "write": true, 00:15:38.176 "unmap": true, 00:15:38.176 "flush": true, 00:15:38.176 "reset": true, 00:15:38.176 "nvme_admin": false, 00:15:38.176 "nvme_io": false, 00:15:38.176 "nvme_io_md": false, 00:15:38.176 "write_zeroes": true, 00:15:38.176 "zcopy": true, 00:15:38.176 "get_zone_info": false, 00:15:38.176 "zone_management": false, 00:15:38.176 "zone_append": false, 00:15:38.176 "compare": false, 00:15:38.176 "compare_and_write": false, 00:15:38.176 "abort": true, 00:15:38.176 "seek_hole": false, 00:15:38.176 "seek_data": false, 00:15:38.176 "copy": true, 00:15:38.176 "nvme_iov_md": false 00:15:38.176 }, 00:15:38.176 "memory_domains": [ 
00:15:38.176 { 00:15:38.176 "dma_device_id": "system", 00:15:38.176 "dma_device_type": 1 00:15:38.176 }, 00:15:38.176 { 00:15:38.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.176 "dma_device_type": 2 00:15:38.176 } 00:15:38.176 ], 00:15:38.176 "driver_specific": {} 00:15:38.176 } 00:15:38.176 ] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.176 [2024-11-26 21:22:56.261120] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.176 [2024-11-26 21:22:56.261174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.176 [2024-11-26 21:22:56.261197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.176 [2024-11-26 21:22:56.263211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.176 [2024-11-26 21:22:56.263263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.176 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.176 "name": "Existed_Raid", 00:15:38.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.176 "strip_size_kb": 64, 00:15:38.176 "state": "configuring", 00:15:38.177 "raid_level": "raid5f", 00:15:38.177 
"superblock": false, 00:15:38.177 "num_base_bdevs": 4, 00:15:38.177 "num_base_bdevs_discovered": 3, 00:15:38.177 "num_base_bdevs_operational": 4, 00:15:38.177 "base_bdevs_list": [ 00:15:38.177 { 00:15:38.177 "name": "BaseBdev1", 00:15:38.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.177 "is_configured": false, 00:15:38.177 "data_offset": 0, 00:15:38.177 "data_size": 0 00:15:38.177 }, 00:15:38.177 { 00:15:38.177 "name": "BaseBdev2", 00:15:38.177 "uuid": "7a463bd0-97d4-4bfd-a2eb-1383e92f671d", 00:15:38.177 "is_configured": true, 00:15:38.177 "data_offset": 0, 00:15:38.177 "data_size": 65536 00:15:38.177 }, 00:15:38.177 { 00:15:38.177 "name": "BaseBdev3", 00:15:38.177 "uuid": "0d8ba2c9-8748-4916-afde-52fbb07552be", 00:15:38.177 "is_configured": true, 00:15:38.177 "data_offset": 0, 00:15:38.177 "data_size": 65536 00:15:38.177 }, 00:15:38.177 { 00:15:38.177 "name": "BaseBdev4", 00:15:38.177 "uuid": "d42aa070-0805-4ce0-9f2b-526b2528387a", 00:15:38.177 "is_configured": true, 00:15:38.177 "data_offset": 0, 00:15:38.177 "data_size": 65536 00:15:38.177 } 00:15:38.177 ] 00:15:38.177 }' 00:15:38.177 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.177 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.744 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:38.744 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.744 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.744 [2024-11-26 21:22:56.704352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.745 "name": "Existed_Raid", 00:15:38.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.745 "strip_size_kb": 64, 00:15:38.745 "state": "configuring", 00:15:38.745 "raid_level": "raid5f", 00:15:38.745 "superblock": false, 
00:15:38.745 "num_base_bdevs": 4, 00:15:38.745 "num_base_bdevs_discovered": 2, 00:15:38.745 "num_base_bdevs_operational": 4, 00:15:38.745 "base_bdevs_list": [ 00:15:38.745 { 00:15:38.745 "name": "BaseBdev1", 00:15:38.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.745 "is_configured": false, 00:15:38.745 "data_offset": 0, 00:15:38.745 "data_size": 0 00:15:38.745 }, 00:15:38.745 { 00:15:38.745 "name": null, 00:15:38.745 "uuid": "7a463bd0-97d4-4bfd-a2eb-1383e92f671d", 00:15:38.745 "is_configured": false, 00:15:38.745 "data_offset": 0, 00:15:38.745 "data_size": 65536 00:15:38.745 }, 00:15:38.745 { 00:15:38.745 "name": "BaseBdev3", 00:15:38.745 "uuid": "0d8ba2c9-8748-4916-afde-52fbb07552be", 00:15:38.745 "is_configured": true, 00:15:38.745 "data_offset": 0, 00:15:38.745 "data_size": 65536 00:15:38.745 }, 00:15:38.745 { 00:15:38.745 "name": "BaseBdev4", 00:15:38.745 "uuid": "d42aa070-0805-4ce0-9f2b-526b2528387a", 00:15:38.745 "is_configured": true, 00:15:38.745 "data_offset": 0, 00:15:38.745 "data_size": 65536 00:15:38.745 } 00:15:38.745 ] 00:15:38.745 }' 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.745 21:22:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.313 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:39.313 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.313 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.313 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.313 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:39.314 
21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.314 [2024-11-26 21:22:57.243286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.314 BaseBdev1 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.314 
21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.314 [ 00:15:39.314 { 00:15:39.314 "name": "BaseBdev1", 00:15:39.314 "aliases": [ 00:15:39.314 "9e301ad1-1dde-421d-9896-4f9f6aef345d" 00:15:39.314 ], 00:15:39.314 "product_name": "Malloc disk", 00:15:39.314 "block_size": 512, 00:15:39.314 "num_blocks": 65536, 00:15:39.314 "uuid": "9e301ad1-1dde-421d-9896-4f9f6aef345d", 00:15:39.314 "assigned_rate_limits": { 00:15:39.314 "rw_ios_per_sec": 0, 00:15:39.314 "rw_mbytes_per_sec": 0, 00:15:39.314 "r_mbytes_per_sec": 0, 00:15:39.314 "w_mbytes_per_sec": 0 00:15:39.314 }, 00:15:39.314 "claimed": true, 00:15:39.314 "claim_type": "exclusive_write", 00:15:39.314 "zoned": false, 00:15:39.314 "supported_io_types": { 00:15:39.314 "read": true, 00:15:39.314 "write": true, 00:15:39.314 "unmap": true, 00:15:39.314 "flush": true, 00:15:39.314 "reset": true, 00:15:39.314 "nvme_admin": false, 00:15:39.314 "nvme_io": false, 00:15:39.314 "nvme_io_md": false, 00:15:39.314 "write_zeroes": true, 00:15:39.314 "zcopy": true, 00:15:39.314 "get_zone_info": false, 00:15:39.314 "zone_management": false, 00:15:39.314 "zone_append": false, 00:15:39.314 "compare": false, 00:15:39.314 "compare_and_write": false, 00:15:39.314 "abort": true, 00:15:39.314 "seek_hole": false, 00:15:39.314 "seek_data": false, 00:15:39.314 "copy": true, 00:15:39.314 "nvme_iov_md": false 00:15:39.314 }, 00:15:39.314 "memory_domains": [ 00:15:39.314 { 00:15:39.314 "dma_device_id": "system", 00:15:39.314 "dma_device_type": 1 00:15:39.314 }, 00:15:39.314 { 00:15:39.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.314 "dma_device_type": 2 00:15:39.314 } 00:15:39.314 ], 00:15:39.314 "driver_specific": {} 00:15:39.314 } 00:15:39.314 ] 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:39.314 21:22:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.314 "name": "Existed_Raid", 00:15:39.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.314 "strip_size_kb": 64, 00:15:39.314 "state": 
"configuring", 00:15:39.314 "raid_level": "raid5f", 00:15:39.314 "superblock": false, 00:15:39.314 "num_base_bdevs": 4, 00:15:39.314 "num_base_bdevs_discovered": 3, 00:15:39.314 "num_base_bdevs_operational": 4, 00:15:39.314 "base_bdevs_list": [ 00:15:39.314 { 00:15:39.314 "name": "BaseBdev1", 00:15:39.314 "uuid": "9e301ad1-1dde-421d-9896-4f9f6aef345d", 00:15:39.314 "is_configured": true, 00:15:39.314 "data_offset": 0, 00:15:39.314 "data_size": 65536 00:15:39.314 }, 00:15:39.314 { 00:15:39.314 "name": null, 00:15:39.314 "uuid": "7a463bd0-97d4-4bfd-a2eb-1383e92f671d", 00:15:39.314 "is_configured": false, 00:15:39.314 "data_offset": 0, 00:15:39.314 "data_size": 65536 00:15:39.314 }, 00:15:39.314 { 00:15:39.314 "name": "BaseBdev3", 00:15:39.314 "uuid": "0d8ba2c9-8748-4916-afde-52fbb07552be", 00:15:39.314 "is_configured": true, 00:15:39.314 "data_offset": 0, 00:15:39.314 "data_size": 65536 00:15:39.314 }, 00:15:39.314 { 00:15:39.314 "name": "BaseBdev4", 00:15:39.314 "uuid": "d42aa070-0805-4ce0-9f2b-526b2528387a", 00:15:39.314 "is_configured": true, 00:15:39.314 "data_offset": 0, 00:15:39.314 "data_size": 65536 00:15:39.314 } 00:15:39.314 ] 00:15:39.314 }' 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.314 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.574 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.574 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.574 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.574 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:39.574 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.863 21:22:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.863 [2024-11-26 21:22:57.746508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.863 21:22:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.863 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.864 21:22:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.864 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.864 "name": "Existed_Raid", 00:15:39.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.864 "strip_size_kb": 64, 00:15:39.864 "state": "configuring", 00:15:39.864 "raid_level": "raid5f", 00:15:39.864 "superblock": false, 00:15:39.864 "num_base_bdevs": 4, 00:15:39.864 "num_base_bdevs_discovered": 2, 00:15:39.864 "num_base_bdevs_operational": 4, 00:15:39.864 "base_bdevs_list": [ 00:15:39.864 { 00:15:39.864 "name": "BaseBdev1", 00:15:39.864 "uuid": "9e301ad1-1dde-421d-9896-4f9f6aef345d", 00:15:39.864 "is_configured": true, 00:15:39.864 "data_offset": 0, 00:15:39.864 "data_size": 65536 00:15:39.864 }, 00:15:39.864 { 00:15:39.864 "name": null, 00:15:39.864 "uuid": "7a463bd0-97d4-4bfd-a2eb-1383e92f671d", 00:15:39.864 "is_configured": false, 00:15:39.864 "data_offset": 0, 00:15:39.864 "data_size": 65536 00:15:39.864 }, 00:15:39.864 { 00:15:39.864 "name": null, 00:15:39.864 "uuid": "0d8ba2c9-8748-4916-afde-52fbb07552be", 00:15:39.864 "is_configured": false, 00:15:39.864 "data_offset": 0, 00:15:39.864 "data_size": 65536 00:15:39.864 }, 00:15:39.864 { 00:15:39.864 "name": "BaseBdev4", 00:15:39.864 "uuid": "d42aa070-0805-4ce0-9f2b-526b2528387a", 00:15:39.864 "is_configured": true, 00:15:39.864 "data_offset": 0, 00:15:39.864 "data_size": 65536 00:15:39.864 } 00:15:39.864 ] 00:15:39.864 }' 00:15:39.864 21:22:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.864 21:22:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.123 [2024-11-26 21:22:58.189734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.123 
21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.123 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.123 "name": "Existed_Raid", 00:15:40.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.123 "strip_size_kb": 64, 00:15:40.123 "state": "configuring", 00:15:40.123 "raid_level": "raid5f", 00:15:40.124 "superblock": false, 00:15:40.124 "num_base_bdevs": 4, 00:15:40.124 "num_base_bdevs_discovered": 3, 00:15:40.124 "num_base_bdevs_operational": 4, 00:15:40.124 "base_bdevs_list": [ 00:15:40.124 { 00:15:40.124 "name": "BaseBdev1", 00:15:40.124 "uuid": "9e301ad1-1dde-421d-9896-4f9f6aef345d", 00:15:40.124 "is_configured": true, 00:15:40.124 "data_offset": 0, 00:15:40.124 "data_size": 65536 00:15:40.124 }, 00:15:40.124 { 00:15:40.124 "name": null, 00:15:40.124 "uuid": "7a463bd0-97d4-4bfd-a2eb-1383e92f671d", 00:15:40.124 "is_configured": 
false, 00:15:40.124 "data_offset": 0, 00:15:40.124 "data_size": 65536 00:15:40.124 }, 00:15:40.124 { 00:15:40.124 "name": "BaseBdev3", 00:15:40.124 "uuid": "0d8ba2c9-8748-4916-afde-52fbb07552be", 00:15:40.124 "is_configured": true, 00:15:40.124 "data_offset": 0, 00:15:40.124 "data_size": 65536 00:15:40.124 }, 00:15:40.124 { 00:15:40.124 "name": "BaseBdev4", 00:15:40.124 "uuid": "d42aa070-0805-4ce0-9f2b-526b2528387a", 00:15:40.124 "is_configured": true, 00:15:40.124 "data_offset": 0, 00:15:40.124 "data_size": 65536 00:15:40.124 } 00:15:40.124 ] 00:15:40.124 }' 00:15:40.124 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.124 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.692 [2024-11-26 21:22:58.688943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.692 "name": "Existed_Raid", 00:15:40.692 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:40.692 "strip_size_kb": 64, 00:15:40.692 "state": "configuring", 00:15:40.692 "raid_level": "raid5f", 00:15:40.692 "superblock": false, 00:15:40.692 "num_base_bdevs": 4, 00:15:40.692 "num_base_bdevs_discovered": 2, 00:15:40.692 "num_base_bdevs_operational": 4, 00:15:40.692 "base_bdevs_list": [ 00:15:40.692 { 00:15:40.692 "name": null, 00:15:40.692 "uuid": "9e301ad1-1dde-421d-9896-4f9f6aef345d", 00:15:40.692 "is_configured": false, 00:15:40.692 "data_offset": 0, 00:15:40.692 "data_size": 65536 00:15:40.692 }, 00:15:40.692 { 00:15:40.692 "name": null, 00:15:40.692 "uuid": "7a463bd0-97d4-4bfd-a2eb-1383e92f671d", 00:15:40.692 "is_configured": false, 00:15:40.692 "data_offset": 0, 00:15:40.692 "data_size": 65536 00:15:40.692 }, 00:15:40.692 { 00:15:40.692 "name": "BaseBdev3", 00:15:40.692 "uuid": "0d8ba2c9-8748-4916-afde-52fbb07552be", 00:15:40.692 "is_configured": true, 00:15:40.692 "data_offset": 0, 00:15:40.692 "data_size": 65536 00:15:40.692 }, 00:15:40.692 { 00:15:40.692 "name": "BaseBdev4", 00:15:40.692 "uuid": "d42aa070-0805-4ce0-9f2b-526b2528387a", 00:15:40.692 "is_configured": true, 00:15:40.692 "data_offset": 0, 00:15:40.692 "data_size": 65536 00:15:40.692 } 00:15:40.692 ] 00:15:40.692 }' 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.692 21:22:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.267 [2024-11-26 21:22:59.315408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.267 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.267 "name": "Existed_Raid", 00:15:41.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.267 "strip_size_kb": 64, 00:15:41.267 "state": "configuring", 00:15:41.267 "raid_level": "raid5f", 00:15:41.267 "superblock": false, 00:15:41.267 "num_base_bdevs": 4, 00:15:41.267 "num_base_bdevs_discovered": 3, 00:15:41.267 "num_base_bdevs_operational": 4, 00:15:41.267 "base_bdevs_list": [ 00:15:41.267 { 00:15:41.267 "name": null, 00:15:41.267 "uuid": "9e301ad1-1dde-421d-9896-4f9f6aef345d", 00:15:41.267 "is_configured": false, 00:15:41.267 "data_offset": 0, 00:15:41.267 "data_size": 65536 00:15:41.267 }, 00:15:41.267 { 00:15:41.267 "name": "BaseBdev2", 00:15:41.267 "uuid": "7a463bd0-97d4-4bfd-a2eb-1383e92f671d", 00:15:41.267 "is_configured": true, 00:15:41.267 "data_offset": 0, 00:15:41.267 "data_size": 65536 00:15:41.267 }, 00:15:41.267 { 00:15:41.268 "name": "BaseBdev3", 00:15:41.268 "uuid": "0d8ba2c9-8748-4916-afde-52fbb07552be", 00:15:41.268 "is_configured": true, 00:15:41.268 "data_offset": 0, 00:15:41.268 "data_size": 65536 00:15:41.268 }, 00:15:41.268 { 00:15:41.268 "name": "BaseBdev4", 00:15:41.268 "uuid": "d42aa070-0805-4ce0-9f2b-526b2528387a", 00:15:41.268 "is_configured": true, 00:15:41.268 "data_offset": 0, 00:15:41.268 "data_size": 65536 00:15:41.268 } 00:15:41.268 ] 00:15:41.268 }' 00:15:41.268 21:22:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.268 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9e301ad1-1dde-421d-9896-4f9f6aef345d 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.858 [2024-11-26 21:22:59.867461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:41.858 [2024-11-26 
21:22:59.867521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:41.858 [2024-11-26 21:22:59.867528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:41.858 [2024-11-26 21:22:59.867795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:41.858 [2024-11-26 21:22:59.874503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:41.858 [2024-11-26 21:22:59.874597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:41.858 [2024-11-26 21:22:59.874866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.858 NewBaseBdev 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.858 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.858 [ 00:15:41.858 { 00:15:41.858 "name": "NewBaseBdev", 00:15:41.858 "aliases": [ 00:15:41.858 "9e301ad1-1dde-421d-9896-4f9f6aef345d" 00:15:41.858 ], 00:15:41.858 "product_name": "Malloc disk", 00:15:41.858 "block_size": 512, 00:15:41.858 "num_blocks": 65536, 00:15:41.858 "uuid": "9e301ad1-1dde-421d-9896-4f9f6aef345d", 00:15:41.858 "assigned_rate_limits": { 00:15:41.858 "rw_ios_per_sec": 0, 00:15:41.858 "rw_mbytes_per_sec": 0, 00:15:41.858 "r_mbytes_per_sec": 0, 00:15:41.858 "w_mbytes_per_sec": 0 00:15:41.858 }, 00:15:41.858 "claimed": true, 00:15:41.858 "claim_type": "exclusive_write", 00:15:41.858 "zoned": false, 00:15:41.858 "supported_io_types": { 00:15:41.858 "read": true, 00:15:41.858 "write": true, 00:15:41.858 "unmap": true, 00:15:41.858 "flush": true, 00:15:41.858 "reset": true, 00:15:41.858 "nvme_admin": false, 00:15:41.858 "nvme_io": false, 00:15:41.858 "nvme_io_md": false, 00:15:41.858 "write_zeroes": true, 00:15:41.858 "zcopy": true, 00:15:41.858 "get_zone_info": false, 00:15:41.858 "zone_management": false, 00:15:41.858 "zone_append": false, 00:15:41.858 "compare": false, 00:15:41.858 "compare_and_write": false, 00:15:41.858 "abort": true, 00:15:41.858 "seek_hole": false, 00:15:41.858 "seek_data": false, 00:15:41.858 "copy": true, 00:15:41.858 "nvme_iov_md": false 00:15:41.858 }, 00:15:41.859 "memory_domains": [ 00:15:41.859 { 00:15:41.859 "dma_device_id": "system", 00:15:41.859 "dma_device_type": 1 00:15:41.859 }, 00:15:41.859 { 00:15:41.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.859 "dma_device_type": 2 00:15:41.859 } 
00:15:41.859 ], 00:15:41.859 "driver_specific": {} 00:15:41.859 } 00:15:41.859 ] 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.859 "name": "Existed_Raid", 00:15:41.859 "uuid": "60fe25fc-5702-44b9-b6f4-ee6f0dcf4f0a", 00:15:41.859 "strip_size_kb": 64, 00:15:41.859 "state": "online", 00:15:41.859 "raid_level": "raid5f", 00:15:41.859 "superblock": false, 00:15:41.859 "num_base_bdevs": 4, 00:15:41.859 "num_base_bdevs_discovered": 4, 00:15:41.859 "num_base_bdevs_operational": 4, 00:15:41.859 "base_bdevs_list": [ 00:15:41.859 { 00:15:41.859 "name": "NewBaseBdev", 00:15:41.859 "uuid": "9e301ad1-1dde-421d-9896-4f9f6aef345d", 00:15:41.859 "is_configured": true, 00:15:41.859 "data_offset": 0, 00:15:41.859 "data_size": 65536 00:15:41.859 }, 00:15:41.859 { 00:15:41.859 "name": "BaseBdev2", 00:15:41.859 "uuid": "7a463bd0-97d4-4bfd-a2eb-1383e92f671d", 00:15:41.859 "is_configured": true, 00:15:41.859 "data_offset": 0, 00:15:41.859 "data_size": 65536 00:15:41.859 }, 00:15:41.859 { 00:15:41.859 "name": "BaseBdev3", 00:15:41.859 "uuid": "0d8ba2c9-8748-4916-afde-52fbb07552be", 00:15:41.859 "is_configured": true, 00:15:41.859 "data_offset": 0, 00:15:41.859 "data_size": 65536 00:15:41.859 }, 00:15:41.859 { 00:15:41.859 "name": "BaseBdev4", 00:15:41.859 "uuid": "d42aa070-0805-4ce0-9f2b-526b2528387a", 00:15:41.859 "is_configured": true, 00:15:41.859 "data_offset": 0, 00:15:41.859 "data_size": 65536 00:15:41.859 } 00:15:41.859 ] 00:15:41.859 }' 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.859 21:22:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.429 [2024-11-26 21:23:00.331029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:42.429 "name": "Existed_Raid", 00:15:42.429 "aliases": [ 00:15:42.429 "60fe25fc-5702-44b9-b6f4-ee6f0dcf4f0a" 00:15:42.429 ], 00:15:42.429 "product_name": "Raid Volume", 00:15:42.429 "block_size": 512, 00:15:42.429 "num_blocks": 196608, 00:15:42.429 "uuid": "60fe25fc-5702-44b9-b6f4-ee6f0dcf4f0a", 00:15:42.429 "assigned_rate_limits": { 00:15:42.429 "rw_ios_per_sec": 0, 00:15:42.429 "rw_mbytes_per_sec": 0, 00:15:42.429 "r_mbytes_per_sec": 0, 00:15:42.429 "w_mbytes_per_sec": 0 00:15:42.429 }, 00:15:42.429 "claimed": false, 00:15:42.429 "zoned": false, 00:15:42.429 "supported_io_types": { 00:15:42.429 "read": true, 00:15:42.429 "write": true, 00:15:42.429 "unmap": false, 00:15:42.429 "flush": false, 00:15:42.429 "reset": true, 00:15:42.429 "nvme_admin": false, 00:15:42.429 "nvme_io": false, 00:15:42.429 "nvme_io_md": 
false, 00:15:42.429 "write_zeroes": true, 00:15:42.429 "zcopy": false, 00:15:42.429 "get_zone_info": false, 00:15:42.429 "zone_management": false, 00:15:42.429 "zone_append": false, 00:15:42.429 "compare": false, 00:15:42.429 "compare_and_write": false, 00:15:42.429 "abort": false, 00:15:42.429 "seek_hole": false, 00:15:42.429 "seek_data": false, 00:15:42.429 "copy": false, 00:15:42.429 "nvme_iov_md": false 00:15:42.429 }, 00:15:42.429 "driver_specific": { 00:15:42.429 "raid": { 00:15:42.429 "uuid": "60fe25fc-5702-44b9-b6f4-ee6f0dcf4f0a", 00:15:42.429 "strip_size_kb": 64, 00:15:42.429 "state": "online", 00:15:42.429 "raid_level": "raid5f", 00:15:42.429 "superblock": false, 00:15:42.429 "num_base_bdevs": 4, 00:15:42.429 "num_base_bdevs_discovered": 4, 00:15:42.429 "num_base_bdevs_operational": 4, 00:15:42.429 "base_bdevs_list": [ 00:15:42.429 { 00:15:42.429 "name": "NewBaseBdev", 00:15:42.429 "uuid": "9e301ad1-1dde-421d-9896-4f9f6aef345d", 00:15:42.429 "is_configured": true, 00:15:42.429 "data_offset": 0, 00:15:42.429 "data_size": 65536 00:15:42.429 }, 00:15:42.429 { 00:15:42.429 "name": "BaseBdev2", 00:15:42.429 "uuid": "7a463bd0-97d4-4bfd-a2eb-1383e92f671d", 00:15:42.429 "is_configured": true, 00:15:42.429 "data_offset": 0, 00:15:42.429 "data_size": 65536 00:15:42.429 }, 00:15:42.429 { 00:15:42.429 "name": "BaseBdev3", 00:15:42.429 "uuid": "0d8ba2c9-8748-4916-afde-52fbb07552be", 00:15:42.429 "is_configured": true, 00:15:42.429 "data_offset": 0, 00:15:42.429 "data_size": 65536 00:15:42.429 }, 00:15:42.429 { 00:15:42.429 "name": "BaseBdev4", 00:15:42.429 "uuid": "d42aa070-0805-4ce0-9f2b-526b2528387a", 00:15:42.429 "is_configured": true, 00:15:42.429 "data_offset": 0, 00:15:42.429 "data_size": 65536 00:15:42.429 } 00:15:42.429 ] 00:15:42.429 } 00:15:42.429 } 00:15:42.429 }' 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:42.429 21:23:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:42.429 BaseBdev2 00:15:42.429 BaseBdev3 00:15:42.429 BaseBdev4' 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.429 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.690 21:23:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.690 [2024-11-26 21:23:00.642282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.690 [2024-11-26 21:23:00.642348] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.690 [2024-11-26 21:23:00.642417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.690 [2024-11-26 21:23:00.642711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.690 [2024-11-26 21:23:00.642722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82564 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82564 ']' 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82564 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82564 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.690 killing process with pid 82564 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82564' 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82564 00:15:42.690 [2024-11-26 21:23:00.692088] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.690 21:23:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82564 00:15:42.950 [2024-11-26 21:23:01.101979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:44.333 ************************************ 00:15:44.333 END TEST raid5f_state_function_test 00:15:44.333 ************************************ 00:15:44.333 00:15:44.333 real 0m11.523s 00:15:44.333 user 0m18.039s 00:15:44.333 sys 0m2.173s 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.333 21:23:02 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:44.333 21:23:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:44.333 21:23:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.333 21:23:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.333 ************************************ 00:15:44.333 START TEST 
raid5f_state_function_test_sb 00:15:44.333 ************************************ 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:44.333 
21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83230 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:44.333 Process raid pid: 83230 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83230' 00:15:44.333 21:23:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83230 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83230 ']' 00:15:44.333 21:23:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.334 21:23:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.334 21:23:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.334 21:23:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.334 21:23:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.334 [2024-11-26 21:23:02.442925] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:15:44.334 [2024-11-26 21:23:02.443047] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.594 [2024-11-26 21:23:02.619049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.854 [2024-11-26 21:23:02.749084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.854 [2024-11-26 21:23:02.988398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.854 [2024-11-26 21:23:02.988555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.424 [2024-11-26 21:23:03.280043] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.424 [2024-11-26 21:23:03.280104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.424 [2024-11-26 21:23:03.280121] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.424 [2024-11-26 21:23:03.280131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.424 [2024-11-26 21:23:03.280137] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:45.424 [2024-11-26 21:23:03.280146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.424 [2024-11-26 21:23:03.280151] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:45.424 [2024-11-26 21:23:03.280160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.424 "name": "Existed_Raid", 00:15:45.424 "uuid": "a85ad52d-42bf-4e7a-9b18-e7b28725a174", 00:15:45.424 "strip_size_kb": 64, 00:15:45.424 "state": "configuring", 00:15:45.424 "raid_level": "raid5f", 00:15:45.424 "superblock": true, 00:15:45.424 "num_base_bdevs": 4, 00:15:45.424 "num_base_bdevs_discovered": 0, 00:15:45.424 "num_base_bdevs_operational": 4, 00:15:45.424 "base_bdevs_list": [ 00:15:45.424 { 00:15:45.424 "name": "BaseBdev1", 00:15:45.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.424 "is_configured": false, 00:15:45.424 "data_offset": 0, 00:15:45.424 "data_size": 0 00:15:45.424 }, 00:15:45.424 { 00:15:45.424 "name": "BaseBdev2", 00:15:45.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.424 "is_configured": false, 00:15:45.424 "data_offset": 0, 00:15:45.424 "data_size": 0 00:15:45.424 }, 00:15:45.424 { 00:15:45.424 "name": "BaseBdev3", 00:15:45.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.424 "is_configured": false, 00:15:45.424 "data_offset": 0, 00:15:45.424 "data_size": 0 00:15:45.424 }, 00:15:45.424 { 00:15:45.424 "name": "BaseBdev4", 00:15:45.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.424 "is_configured": false, 00:15:45.424 "data_offset": 0, 00:15:45.424 "data_size": 0 00:15:45.424 } 00:15:45.424 ] 00:15:45.424 }' 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.424 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:45.684 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.684 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.684 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.684 [2024-11-26 21:23:03.751118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.684 [2024-11-26 21:23:03.751222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:45.684 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.684 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:45.684 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.684 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.684 [2024-11-26 21:23:03.763107] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.684 [2024-11-26 21:23:03.763187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.684 [2024-11-26 21:23:03.763217] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.684 [2024-11-26 21:23:03.763239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.684 [2024-11-26 21:23:03.763288] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:45.685 [2024-11-26 21:23:03.763309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.685 [2024-11-26 21:23:03.763347] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:45.685 [2024-11-26 21:23:03.763369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.685 [2024-11-26 21:23:03.813338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.685 BaseBdev1 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.685 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.685 [ 00:15:45.685 { 00:15:45.685 "name": "BaseBdev1", 00:15:45.685 "aliases": [ 00:15:45.685 "110ca095-667a-4bd2-b7a5-40feec58c44b" 00:15:45.685 ], 00:15:45.685 "product_name": "Malloc disk", 00:15:45.685 "block_size": 512, 00:15:45.685 "num_blocks": 65536, 00:15:45.685 "uuid": "110ca095-667a-4bd2-b7a5-40feec58c44b", 00:15:45.685 "assigned_rate_limits": { 00:15:45.685 "rw_ios_per_sec": 0, 00:15:45.685 "rw_mbytes_per_sec": 0, 00:15:45.685 "r_mbytes_per_sec": 0, 00:15:45.685 "w_mbytes_per_sec": 0 00:15:45.685 }, 00:15:45.685 "claimed": true, 00:15:45.685 "claim_type": "exclusive_write", 00:15:45.685 "zoned": false, 00:15:45.685 "supported_io_types": { 00:15:45.685 "read": true, 00:15:45.685 "write": true, 00:15:45.685 "unmap": true, 00:15:45.685 "flush": true, 00:15:45.685 "reset": true, 00:15:45.685 "nvme_admin": false, 00:15:45.685 "nvme_io": false, 00:15:45.685 "nvme_io_md": false, 00:15:45.685 "write_zeroes": true, 00:15:45.685 "zcopy": true, 00:15:45.685 "get_zone_info": false, 00:15:45.685 "zone_management": false, 00:15:45.685 "zone_append": false, 00:15:45.685 "compare": false, 00:15:45.685 "compare_and_write": false, 00:15:45.685 "abort": true, 00:15:45.685 "seek_hole": false, 00:15:45.685 "seek_data": false, 00:15:45.945 "copy": true, 00:15:45.945 "nvme_iov_md": false 00:15:45.945 }, 00:15:45.945 "memory_domains": [ 00:15:45.945 { 00:15:45.945 "dma_device_id": "system", 00:15:45.945 "dma_device_type": 1 00:15:45.945 }, 00:15:45.945 { 00:15:45.945 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:45.945 "dma_device_type": 2 00:15:45.945 } 00:15:45.945 ], 00:15:45.945 "driver_specific": {} 00:15:45.945 } 00:15:45.945 ] 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.945 21:23:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.945 "name": "Existed_Raid", 00:15:45.945 "uuid": "18d26c2c-0e20-4578-ab21-fcd86035c291", 00:15:45.945 "strip_size_kb": 64, 00:15:45.945 "state": "configuring", 00:15:45.945 "raid_level": "raid5f", 00:15:45.945 "superblock": true, 00:15:45.945 "num_base_bdevs": 4, 00:15:45.945 "num_base_bdevs_discovered": 1, 00:15:45.945 "num_base_bdevs_operational": 4, 00:15:45.945 "base_bdevs_list": [ 00:15:45.945 { 00:15:45.945 "name": "BaseBdev1", 00:15:45.945 "uuid": "110ca095-667a-4bd2-b7a5-40feec58c44b", 00:15:45.945 "is_configured": true, 00:15:45.945 "data_offset": 2048, 00:15:45.945 "data_size": 63488 00:15:45.945 }, 00:15:45.945 { 00:15:45.945 "name": "BaseBdev2", 00:15:45.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.945 "is_configured": false, 00:15:45.945 "data_offset": 0, 00:15:45.945 "data_size": 0 00:15:45.945 }, 00:15:45.945 { 00:15:45.945 "name": "BaseBdev3", 00:15:45.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.945 "is_configured": false, 00:15:45.945 "data_offset": 0, 00:15:45.945 "data_size": 0 00:15:45.945 }, 00:15:45.945 { 00:15:45.945 "name": "BaseBdev4", 00:15:45.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.945 "is_configured": false, 00:15:45.945 "data_offset": 0, 00:15:45.945 "data_size": 0 00:15:45.945 } 00:15:45.945 ] 00:15:45.945 }' 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.945 21:23:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.205 21:23:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.205 [2024-11-26 21:23:04.304490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.205 [2024-11-26 21:23:04.304577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.205 [2024-11-26 21:23:04.316536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.205 [2024-11-26 21:23:04.318549] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.205 [2024-11-26 21:23:04.318590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.205 [2024-11-26 21:23:04.318600] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:46.205 [2024-11-26 21:23:04.318609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.205 [2024-11-26 21:23:04.318616] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:46.205 [2024-11-26 21:23:04.318623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.205 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.205 21:23:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.465 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.465 "name": "Existed_Raid", 00:15:46.465 "uuid": "ac9a4ca2-f120-4361-86a7-ef4c5d4f0d92", 00:15:46.465 "strip_size_kb": 64, 00:15:46.465 "state": "configuring", 00:15:46.465 "raid_level": "raid5f", 00:15:46.466 "superblock": true, 00:15:46.466 "num_base_bdevs": 4, 00:15:46.466 "num_base_bdevs_discovered": 1, 00:15:46.466 "num_base_bdevs_operational": 4, 00:15:46.466 "base_bdevs_list": [ 00:15:46.466 { 00:15:46.466 "name": "BaseBdev1", 00:15:46.466 "uuid": "110ca095-667a-4bd2-b7a5-40feec58c44b", 00:15:46.466 "is_configured": true, 00:15:46.466 "data_offset": 2048, 00:15:46.466 "data_size": 63488 00:15:46.466 }, 00:15:46.466 { 00:15:46.466 "name": "BaseBdev2", 00:15:46.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.466 "is_configured": false, 00:15:46.466 "data_offset": 0, 00:15:46.466 "data_size": 0 00:15:46.466 }, 00:15:46.466 { 00:15:46.466 "name": "BaseBdev3", 00:15:46.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.466 "is_configured": false, 00:15:46.466 "data_offset": 0, 00:15:46.466 "data_size": 0 00:15:46.466 }, 00:15:46.466 { 00:15:46.466 "name": "BaseBdev4", 00:15:46.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.466 "is_configured": false, 00:15:46.466 "data_offset": 0, 00:15:46.466 "data_size": 0 00:15:46.466 } 00:15:46.466 ] 00:15:46.466 }' 00:15:46.466 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.466 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.726 [2024-11-26 21:23:04.759571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.726 BaseBdev2 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.726 [ 00:15:46.726 { 00:15:46.726 "name": "BaseBdev2", 00:15:46.726 "aliases": [ 00:15:46.726 
"52f0b959-102b-41f3-a9f5-c39eb8ec0b48" 00:15:46.726 ], 00:15:46.726 "product_name": "Malloc disk", 00:15:46.726 "block_size": 512, 00:15:46.726 "num_blocks": 65536, 00:15:46.726 "uuid": "52f0b959-102b-41f3-a9f5-c39eb8ec0b48", 00:15:46.726 "assigned_rate_limits": { 00:15:46.726 "rw_ios_per_sec": 0, 00:15:46.726 "rw_mbytes_per_sec": 0, 00:15:46.726 "r_mbytes_per_sec": 0, 00:15:46.726 "w_mbytes_per_sec": 0 00:15:46.726 }, 00:15:46.726 "claimed": true, 00:15:46.726 "claim_type": "exclusive_write", 00:15:46.726 "zoned": false, 00:15:46.726 "supported_io_types": { 00:15:46.726 "read": true, 00:15:46.726 "write": true, 00:15:46.726 "unmap": true, 00:15:46.726 "flush": true, 00:15:46.726 "reset": true, 00:15:46.726 "nvme_admin": false, 00:15:46.726 "nvme_io": false, 00:15:46.726 "nvme_io_md": false, 00:15:46.726 "write_zeroes": true, 00:15:46.726 "zcopy": true, 00:15:46.726 "get_zone_info": false, 00:15:46.726 "zone_management": false, 00:15:46.726 "zone_append": false, 00:15:46.726 "compare": false, 00:15:46.726 "compare_and_write": false, 00:15:46.726 "abort": true, 00:15:46.726 "seek_hole": false, 00:15:46.726 "seek_data": false, 00:15:46.726 "copy": true, 00:15:46.726 "nvme_iov_md": false 00:15:46.726 }, 00:15:46.726 "memory_domains": [ 00:15:46.726 { 00:15:46.726 "dma_device_id": "system", 00:15:46.726 "dma_device_type": 1 00:15:46.726 }, 00:15:46.726 { 00:15:46.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.726 "dma_device_type": 2 00:15:46.726 } 00:15:46.726 ], 00:15:46.726 "driver_specific": {} 00:15:46.726 } 00:15:46.726 ] 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.726 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.726 "name": "Existed_Raid", 00:15:46.726 "uuid": 
"ac9a4ca2-f120-4361-86a7-ef4c5d4f0d92", 00:15:46.726 "strip_size_kb": 64, 00:15:46.726 "state": "configuring", 00:15:46.726 "raid_level": "raid5f", 00:15:46.726 "superblock": true, 00:15:46.726 "num_base_bdevs": 4, 00:15:46.726 "num_base_bdevs_discovered": 2, 00:15:46.726 "num_base_bdevs_operational": 4, 00:15:46.726 "base_bdevs_list": [ 00:15:46.726 { 00:15:46.726 "name": "BaseBdev1", 00:15:46.726 "uuid": "110ca095-667a-4bd2-b7a5-40feec58c44b", 00:15:46.726 "is_configured": true, 00:15:46.726 "data_offset": 2048, 00:15:46.727 "data_size": 63488 00:15:46.727 }, 00:15:46.727 { 00:15:46.727 "name": "BaseBdev2", 00:15:46.727 "uuid": "52f0b959-102b-41f3-a9f5-c39eb8ec0b48", 00:15:46.727 "is_configured": true, 00:15:46.727 "data_offset": 2048, 00:15:46.727 "data_size": 63488 00:15:46.727 }, 00:15:46.727 { 00:15:46.727 "name": "BaseBdev3", 00:15:46.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.727 "is_configured": false, 00:15:46.727 "data_offset": 0, 00:15:46.727 "data_size": 0 00:15:46.727 }, 00:15:46.727 { 00:15:46.727 "name": "BaseBdev4", 00:15:46.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.727 "is_configured": false, 00:15:46.727 "data_offset": 0, 00:15:46.727 "data_size": 0 00:15:46.727 } 00:15:46.727 ] 00:15:46.727 }' 00:15:46.727 21:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.727 21:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.298 [2024-11-26 21:23:05.296655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.298 BaseBdev3 
00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.298 [ 00:15:47.298 { 00:15:47.298 "name": "BaseBdev3", 00:15:47.298 "aliases": [ 00:15:47.298 "557a1df2-6eac-4b7b-9099-cb5326f3a2a9" 00:15:47.298 ], 00:15:47.298 "product_name": "Malloc disk", 00:15:47.298 "block_size": 512, 00:15:47.298 "num_blocks": 65536, 00:15:47.298 "uuid": "557a1df2-6eac-4b7b-9099-cb5326f3a2a9", 00:15:47.298 
"assigned_rate_limits": { 00:15:47.298 "rw_ios_per_sec": 0, 00:15:47.298 "rw_mbytes_per_sec": 0, 00:15:47.298 "r_mbytes_per_sec": 0, 00:15:47.298 "w_mbytes_per_sec": 0 00:15:47.298 }, 00:15:47.298 "claimed": true, 00:15:47.298 "claim_type": "exclusive_write", 00:15:47.298 "zoned": false, 00:15:47.298 "supported_io_types": { 00:15:47.298 "read": true, 00:15:47.298 "write": true, 00:15:47.298 "unmap": true, 00:15:47.298 "flush": true, 00:15:47.298 "reset": true, 00:15:47.298 "nvme_admin": false, 00:15:47.298 "nvme_io": false, 00:15:47.298 "nvme_io_md": false, 00:15:47.298 "write_zeroes": true, 00:15:47.298 "zcopy": true, 00:15:47.298 "get_zone_info": false, 00:15:47.298 "zone_management": false, 00:15:47.298 "zone_append": false, 00:15:47.298 "compare": false, 00:15:47.298 "compare_and_write": false, 00:15:47.298 "abort": true, 00:15:47.298 "seek_hole": false, 00:15:47.298 "seek_data": false, 00:15:47.298 "copy": true, 00:15:47.298 "nvme_iov_md": false 00:15:47.298 }, 00:15:47.298 "memory_domains": [ 00:15:47.298 { 00:15:47.298 "dma_device_id": "system", 00:15:47.298 "dma_device_type": 1 00:15:47.298 }, 00:15:47.298 { 00:15:47.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.298 "dma_device_type": 2 00:15:47.298 } 00:15:47.298 ], 00:15:47.298 "driver_specific": {} 00:15:47.298 } 00:15:47.298 ] 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.298 "name": "Existed_Raid", 00:15:47.298 "uuid": "ac9a4ca2-f120-4361-86a7-ef4c5d4f0d92", 00:15:47.298 "strip_size_kb": 64, 00:15:47.298 "state": "configuring", 00:15:47.298 "raid_level": "raid5f", 00:15:47.298 "superblock": true, 00:15:47.298 "num_base_bdevs": 4, 00:15:47.298 "num_base_bdevs_discovered": 3, 
00:15:47.298 "num_base_bdevs_operational": 4, 00:15:47.298 "base_bdevs_list": [ 00:15:47.298 { 00:15:47.298 "name": "BaseBdev1", 00:15:47.298 "uuid": "110ca095-667a-4bd2-b7a5-40feec58c44b", 00:15:47.298 "is_configured": true, 00:15:47.298 "data_offset": 2048, 00:15:47.298 "data_size": 63488 00:15:47.298 }, 00:15:47.298 { 00:15:47.298 "name": "BaseBdev2", 00:15:47.298 "uuid": "52f0b959-102b-41f3-a9f5-c39eb8ec0b48", 00:15:47.298 "is_configured": true, 00:15:47.298 "data_offset": 2048, 00:15:47.298 "data_size": 63488 00:15:47.298 }, 00:15:47.298 { 00:15:47.298 "name": "BaseBdev3", 00:15:47.298 "uuid": "557a1df2-6eac-4b7b-9099-cb5326f3a2a9", 00:15:47.298 "is_configured": true, 00:15:47.298 "data_offset": 2048, 00:15:47.298 "data_size": 63488 00:15:47.298 }, 00:15:47.298 { 00:15:47.298 "name": "BaseBdev4", 00:15:47.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.298 "is_configured": false, 00:15:47.298 "data_offset": 0, 00:15:47.298 "data_size": 0 00:15:47.298 } 00:15:47.298 ] 00:15:47.298 }' 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.298 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.869 [2024-11-26 21:23:05.831302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:47.869 [2024-11-26 21:23:05.831655] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:47.869 [2024-11-26 21:23:05.831676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:47.869 [2024-11-26 
21:23:05.831955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:47.869 BaseBdev4 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.869 [2024-11-26 21:23:05.838920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:47.869 [2024-11-26 21:23:05.839005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:47.869 [2024-11-26 21:23:05.839306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:47.869 21:23:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.869 [ 00:15:47.869 { 00:15:47.869 "name": "BaseBdev4", 00:15:47.869 "aliases": [ 00:15:47.869 "13d6c21f-4a4a-40ca-ae77-2e167881dd03" 00:15:47.869 ], 00:15:47.869 "product_name": "Malloc disk", 00:15:47.869 "block_size": 512, 00:15:47.869 "num_blocks": 65536, 00:15:47.869 "uuid": "13d6c21f-4a4a-40ca-ae77-2e167881dd03", 00:15:47.869 "assigned_rate_limits": { 00:15:47.869 "rw_ios_per_sec": 0, 00:15:47.869 "rw_mbytes_per_sec": 0, 00:15:47.869 "r_mbytes_per_sec": 0, 00:15:47.869 "w_mbytes_per_sec": 0 00:15:47.869 }, 00:15:47.869 "claimed": true, 00:15:47.869 "claim_type": "exclusive_write", 00:15:47.869 "zoned": false, 00:15:47.869 "supported_io_types": { 00:15:47.869 "read": true, 00:15:47.869 "write": true, 00:15:47.869 "unmap": true, 00:15:47.869 "flush": true, 00:15:47.869 "reset": true, 00:15:47.869 "nvme_admin": false, 00:15:47.869 "nvme_io": false, 00:15:47.869 "nvme_io_md": false, 00:15:47.869 "write_zeroes": true, 00:15:47.869 "zcopy": true, 00:15:47.869 "get_zone_info": false, 00:15:47.869 "zone_management": false, 00:15:47.869 "zone_append": false, 00:15:47.869 "compare": false, 00:15:47.869 "compare_and_write": false, 00:15:47.869 "abort": true, 00:15:47.869 "seek_hole": false, 00:15:47.869 "seek_data": false, 00:15:47.869 "copy": true, 00:15:47.869 "nvme_iov_md": false 00:15:47.869 }, 00:15:47.869 "memory_domains": [ 00:15:47.869 { 00:15:47.869 "dma_device_id": "system", 00:15:47.869 "dma_device_type": 1 00:15:47.869 }, 00:15:47.869 { 00:15:47.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.869 "dma_device_type": 2 00:15:47.869 } 00:15:47.869 ], 00:15:47.869 "driver_specific": {} 00:15:47.869 } 00:15:47.869 ] 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.869 21:23:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.869 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.869 "name": "Existed_Raid", 00:15:47.869 "uuid": "ac9a4ca2-f120-4361-86a7-ef4c5d4f0d92", 00:15:47.869 "strip_size_kb": 64, 00:15:47.869 "state": "online", 00:15:47.869 "raid_level": "raid5f", 00:15:47.869 "superblock": true, 00:15:47.869 "num_base_bdevs": 4, 00:15:47.869 "num_base_bdevs_discovered": 4, 00:15:47.869 "num_base_bdevs_operational": 4, 00:15:47.869 "base_bdevs_list": [ 00:15:47.869 { 00:15:47.869 "name": "BaseBdev1", 00:15:47.869 "uuid": "110ca095-667a-4bd2-b7a5-40feec58c44b", 00:15:47.869 "is_configured": true, 00:15:47.869 "data_offset": 2048, 00:15:47.869 "data_size": 63488 00:15:47.869 }, 00:15:47.869 { 00:15:47.869 "name": "BaseBdev2", 00:15:47.869 "uuid": "52f0b959-102b-41f3-a9f5-c39eb8ec0b48", 00:15:47.869 "is_configured": true, 00:15:47.869 "data_offset": 2048, 00:15:47.869 "data_size": 63488 00:15:47.869 }, 00:15:47.869 { 00:15:47.869 "name": "BaseBdev3", 00:15:47.869 "uuid": "557a1df2-6eac-4b7b-9099-cb5326f3a2a9", 00:15:47.869 "is_configured": true, 00:15:47.869 "data_offset": 2048, 00:15:47.869 "data_size": 63488 00:15:47.869 }, 00:15:47.869 { 00:15:47.869 "name": "BaseBdev4", 00:15:47.869 "uuid": "13d6c21f-4a4a-40ca-ae77-2e167881dd03", 00:15:47.869 "is_configured": true, 00:15:47.869 "data_offset": 2048, 00:15:47.869 "data_size": 63488 00:15:47.869 } 00:15:47.869 ] 00:15:47.869 }' 00:15:47.870 21:23:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.870 21:23:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.441 [2024-11-26 21:23:06.339334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.441 "name": "Existed_Raid", 00:15:48.441 "aliases": [ 00:15:48.441 "ac9a4ca2-f120-4361-86a7-ef4c5d4f0d92" 00:15:48.441 ], 00:15:48.441 "product_name": "Raid Volume", 00:15:48.441 "block_size": 512, 00:15:48.441 "num_blocks": 190464, 00:15:48.441 "uuid": "ac9a4ca2-f120-4361-86a7-ef4c5d4f0d92", 00:15:48.441 "assigned_rate_limits": { 00:15:48.441 "rw_ios_per_sec": 0, 00:15:48.441 "rw_mbytes_per_sec": 0, 00:15:48.441 "r_mbytes_per_sec": 0, 00:15:48.441 "w_mbytes_per_sec": 0 00:15:48.441 }, 00:15:48.441 "claimed": false, 00:15:48.441 "zoned": false, 00:15:48.441 "supported_io_types": { 00:15:48.441 "read": true, 00:15:48.441 "write": true, 00:15:48.441 "unmap": false, 00:15:48.441 "flush": false, 
00:15:48.441 "reset": true, 00:15:48.441 "nvme_admin": false, 00:15:48.441 "nvme_io": false, 00:15:48.441 "nvme_io_md": false, 00:15:48.441 "write_zeroes": true, 00:15:48.441 "zcopy": false, 00:15:48.441 "get_zone_info": false, 00:15:48.441 "zone_management": false, 00:15:48.441 "zone_append": false, 00:15:48.441 "compare": false, 00:15:48.441 "compare_and_write": false, 00:15:48.441 "abort": false, 00:15:48.441 "seek_hole": false, 00:15:48.441 "seek_data": false, 00:15:48.441 "copy": false, 00:15:48.441 "nvme_iov_md": false 00:15:48.441 }, 00:15:48.441 "driver_specific": { 00:15:48.441 "raid": { 00:15:48.441 "uuid": "ac9a4ca2-f120-4361-86a7-ef4c5d4f0d92", 00:15:48.441 "strip_size_kb": 64, 00:15:48.441 "state": "online", 00:15:48.441 "raid_level": "raid5f", 00:15:48.441 "superblock": true, 00:15:48.441 "num_base_bdevs": 4, 00:15:48.441 "num_base_bdevs_discovered": 4, 00:15:48.441 "num_base_bdevs_operational": 4, 00:15:48.441 "base_bdevs_list": [ 00:15:48.441 { 00:15:48.441 "name": "BaseBdev1", 00:15:48.441 "uuid": "110ca095-667a-4bd2-b7a5-40feec58c44b", 00:15:48.441 "is_configured": true, 00:15:48.441 "data_offset": 2048, 00:15:48.441 "data_size": 63488 00:15:48.441 }, 00:15:48.441 { 00:15:48.441 "name": "BaseBdev2", 00:15:48.441 "uuid": "52f0b959-102b-41f3-a9f5-c39eb8ec0b48", 00:15:48.441 "is_configured": true, 00:15:48.441 "data_offset": 2048, 00:15:48.441 "data_size": 63488 00:15:48.441 }, 00:15:48.441 { 00:15:48.441 "name": "BaseBdev3", 00:15:48.441 "uuid": "557a1df2-6eac-4b7b-9099-cb5326f3a2a9", 00:15:48.441 "is_configured": true, 00:15:48.441 "data_offset": 2048, 00:15:48.441 "data_size": 63488 00:15:48.441 }, 00:15:48.441 { 00:15:48.441 "name": "BaseBdev4", 00:15:48.441 "uuid": "13d6c21f-4a4a-40ca-ae77-2e167881dd03", 00:15:48.441 "is_configured": true, 00:15:48.441 "data_offset": 2048, 00:15:48.441 "data_size": 63488 00:15:48.441 } 00:15:48.441 ] 00:15:48.441 } 00:15:48.441 } 00:15:48.441 }' 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.441 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:48.441 BaseBdev2 00:15:48.441 BaseBdev3 00:15:48.441 BaseBdev4' 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.442 21:23:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.442 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.702 [2024-11-26 21:23:06.642676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.702 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.702 "name": "Existed_Raid", 00:15:48.702 "uuid": "ac9a4ca2-f120-4361-86a7-ef4c5d4f0d92", 00:15:48.703 "strip_size_kb": 64, 00:15:48.703 "state": "online", 00:15:48.703 "raid_level": "raid5f", 00:15:48.703 "superblock": true, 00:15:48.703 "num_base_bdevs": 4, 00:15:48.703 "num_base_bdevs_discovered": 3, 00:15:48.703 "num_base_bdevs_operational": 3, 00:15:48.703 "base_bdevs_list": [ 00:15:48.703 { 00:15:48.703 "name": null, 00:15:48.703 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:48.703 "is_configured": false, 00:15:48.703 "data_offset": 0, 00:15:48.703 "data_size": 63488 00:15:48.703 }, 00:15:48.703 { 00:15:48.703 "name": "BaseBdev2", 00:15:48.703 "uuid": "52f0b959-102b-41f3-a9f5-c39eb8ec0b48", 00:15:48.703 "is_configured": true, 00:15:48.703 "data_offset": 2048, 00:15:48.703 "data_size": 63488 00:15:48.703 }, 00:15:48.703 { 00:15:48.703 "name": "BaseBdev3", 00:15:48.703 "uuid": "557a1df2-6eac-4b7b-9099-cb5326f3a2a9", 00:15:48.703 "is_configured": true, 00:15:48.703 "data_offset": 2048, 00:15:48.703 "data_size": 63488 00:15:48.703 }, 00:15:48.703 { 00:15:48.703 "name": "BaseBdev4", 00:15:48.703 "uuid": "13d6c21f-4a4a-40ca-ae77-2e167881dd03", 00:15:48.703 "is_configured": true, 00:15:48.703 "data_offset": 2048, 00:15:48.703 "data_size": 63488 00:15:48.703 } 00:15:48.703 ] 00:15:48.703 }' 00:15:48.703 21:23:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.703 21:23:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.273 [2024-11-26 21:23:07.220265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:49.273 [2024-11-26 21:23:07.220511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.273 [2024-11-26 21:23:07.321409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:49.273 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:49.274 
21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:49.274 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.274 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.274 [2024-11-26 21:23:07.377314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:49.534 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.535 [2024-11-26 21:23:07.537721] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:49.535 [2024-11-26 21:23:07.537779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.535 21:23:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:49.796 BaseBdev2 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.796 [ 00:15:49.796 { 00:15:49.796 "name": "BaseBdev2", 00:15:49.796 "aliases": [ 00:15:49.796 "a8c89794-24a9-48ac-8691-4200bb50af80" 00:15:49.796 ], 00:15:49.796 "product_name": "Malloc disk", 00:15:49.796 "block_size": 512, 00:15:49.796 "num_blocks": 65536, 00:15:49.796 "uuid": 
"a8c89794-24a9-48ac-8691-4200bb50af80", 00:15:49.796 "assigned_rate_limits": { 00:15:49.796 "rw_ios_per_sec": 0, 00:15:49.796 "rw_mbytes_per_sec": 0, 00:15:49.796 "r_mbytes_per_sec": 0, 00:15:49.796 "w_mbytes_per_sec": 0 00:15:49.796 }, 00:15:49.796 "claimed": false, 00:15:49.796 "zoned": false, 00:15:49.796 "supported_io_types": { 00:15:49.796 "read": true, 00:15:49.796 "write": true, 00:15:49.796 "unmap": true, 00:15:49.796 "flush": true, 00:15:49.796 "reset": true, 00:15:49.796 "nvme_admin": false, 00:15:49.796 "nvme_io": false, 00:15:49.796 "nvme_io_md": false, 00:15:49.796 "write_zeroes": true, 00:15:49.796 "zcopy": true, 00:15:49.796 "get_zone_info": false, 00:15:49.796 "zone_management": false, 00:15:49.796 "zone_append": false, 00:15:49.796 "compare": false, 00:15:49.796 "compare_and_write": false, 00:15:49.796 "abort": true, 00:15:49.796 "seek_hole": false, 00:15:49.796 "seek_data": false, 00:15:49.796 "copy": true, 00:15:49.796 "nvme_iov_md": false 00:15:49.796 }, 00:15:49.796 "memory_domains": [ 00:15:49.796 { 00:15:49.796 "dma_device_id": "system", 00:15:49.796 "dma_device_type": 1 00:15:49.796 }, 00:15:49.796 { 00:15:49.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.796 "dma_device_type": 2 00:15:49.796 } 00:15:49.796 ], 00:15:49.796 "driver_specific": {} 00:15:49.796 } 00:15:49.796 ] 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.796 BaseBdev3 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.796 [ 00:15:49.796 { 00:15:49.796 "name": "BaseBdev3", 00:15:49.796 "aliases": [ 00:15:49.796 "e0d0e454-4c22-4d30-af31-ffdfb573650d" 00:15:49.796 ], 00:15:49.796 
"product_name": "Malloc disk", 00:15:49.796 "block_size": 512, 00:15:49.796 "num_blocks": 65536, 00:15:49.796 "uuid": "e0d0e454-4c22-4d30-af31-ffdfb573650d", 00:15:49.796 "assigned_rate_limits": { 00:15:49.796 "rw_ios_per_sec": 0, 00:15:49.796 "rw_mbytes_per_sec": 0, 00:15:49.796 "r_mbytes_per_sec": 0, 00:15:49.796 "w_mbytes_per_sec": 0 00:15:49.796 }, 00:15:49.796 "claimed": false, 00:15:49.796 "zoned": false, 00:15:49.796 "supported_io_types": { 00:15:49.796 "read": true, 00:15:49.796 "write": true, 00:15:49.796 "unmap": true, 00:15:49.796 "flush": true, 00:15:49.796 "reset": true, 00:15:49.796 "nvme_admin": false, 00:15:49.796 "nvme_io": false, 00:15:49.796 "nvme_io_md": false, 00:15:49.796 "write_zeroes": true, 00:15:49.796 "zcopy": true, 00:15:49.796 "get_zone_info": false, 00:15:49.796 "zone_management": false, 00:15:49.796 "zone_append": false, 00:15:49.796 "compare": false, 00:15:49.796 "compare_and_write": false, 00:15:49.796 "abort": true, 00:15:49.796 "seek_hole": false, 00:15:49.796 "seek_data": false, 00:15:49.796 "copy": true, 00:15:49.796 "nvme_iov_md": false 00:15:49.796 }, 00:15:49.796 "memory_domains": [ 00:15:49.796 { 00:15:49.796 "dma_device_id": "system", 00:15:49.796 "dma_device_type": 1 00:15:49.796 }, 00:15:49.796 { 00:15:49.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.796 "dma_device_type": 2 00:15:49.796 } 00:15:49.796 ], 00:15:49.796 "driver_specific": {} 00:15:49.796 } 00:15:49.796 ] 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.796 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.796 BaseBdev4 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.797 [ 00:15:49.797 { 00:15:49.797 "name": "BaseBdev4", 00:15:49.797 
"aliases": [ 00:15:49.797 "024b70d8-9199-44b7-a042-3179b463b9e5" 00:15:49.797 ], 00:15:49.797 "product_name": "Malloc disk", 00:15:49.797 "block_size": 512, 00:15:49.797 "num_blocks": 65536, 00:15:49.797 "uuid": "024b70d8-9199-44b7-a042-3179b463b9e5", 00:15:49.797 "assigned_rate_limits": { 00:15:49.797 "rw_ios_per_sec": 0, 00:15:49.797 "rw_mbytes_per_sec": 0, 00:15:49.797 "r_mbytes_per_sec": 0, 00:15:49.797 "w_mbytes_per_sec": 0 00:15:49.797 }, 00:15:49.797 "claimed": false, 00:15:49.797 "zoned": false, 00:15:49.797 "supported_io_types": { 00:15:49.797 "read": true, 00:15:49.797 "write": true, 00:15:49.797 "unmap": true, 00:15:49.797 "flush": true, 00:15:49.797 "reset": true, 00:15:49.797 "nvme_admin": false, 00:15:49.797 "nvme_io": false, 00:15:49.797 "nvme_io_md": false, 00:15:49.797 "write_zeroes": true, 00:15:49.797 "zcopy": true, 00:15:49.797 "get_zone_info": false, 00:15:49.797 "zone_management": false, 00:15:49.797 "zone_append": false, 00:15:49.797 "compare": false, 00:15:49.797 "compare_and_write": false, 00:15:49.797 "abort": true, 00:15:49.797 "seek_hole": false, 00:15:49.797 "seek_data": false, 00:15:49.797 "copy": true, 00:15:49.797 "nvme_iov_md": false 00:15:49.797 }, 00:15:49.797 "memory_domains": [ 00:15:49.797 { 00:15:49.797 "dma_device_id": "system", 00:15:49.797 "dma_device_type": 1 00:15:49.797 }, 00:15:49.797 { 00:15:49.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.797 "dma_device_type": 2 00:15:49.797 } 00:15:49.797 ], 00:15:49.797 "driver_specific": {} 00:15:49.797 } 00:15:49.797 ] 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.797 
21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.797 [2024-11-26 21:23:07.944284] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.797 [2024-11-26 21:23:07.944434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.797 [2024-11-26 21:23:07.944480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.797 [2024-11-26 21:23:07.946614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.797 [2024-11-26 21:23:07.946723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.797 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.057 "name": "Existed_Raid", 00:15:50.057 "uuid": "5c288ca7-8909-4d7b-ba41-23e7b1bf0c07", 00:15:50.057 "strip_size_kb": 64, 00:15:50.057 "state": "configuring", 00:15:50.057 "raid_level": "raid5f", 00:15:50.057 "superblock": true, 00:15:50.057 "num_base_bdevs": 4, 00:15:50.057 "num_base_bdevs_discovered": 3, 00:15:50.057 "num_base_bdevs_operational": 4, 00:15:50.057 "base_bdevs_list": [ 00:15:50.057 { 00:15:50.057 "name": "BaseBdev1", 00:15:50.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.057 "is_configured": false, 00:15:50.057 "data_offset": 0, 00:15:50.057 "data_size": 0 00:15:50.057 }, 00:15:50.057 { 00:15:50.057 "name": "BaseBdev2", 00:15:50.057 "uuid": "a8c89794-24a9-48ac-8691-4200bb50af80", 00:15:50.057 "is_configured": true, 00:15:50.057 "data_offset": 2048, 00:15:50.057 "data_size": 63488 00:15:50.057 }, 00:15:50.057 { 00:15:50.057 "name": "BaseBdev3", 
00:15:50.057 "uuid": "e0d0e454-4c22-4d30-af31-ffdfb573650d", 00:15:50.057 "is_configured": true, 00:15:50.057 "data_offset": 2048, 00:15:50.057 "data_size": 63488 00:15:50.057 }, 00:15:50.057 { 00:15:50.057 "name": "BaseBdev4", 00:15:50.057 "uuid": "024b70d8-9199-44b7-a042-3179b463b9e5", 00:15:50.057 "is_configured": true, 00:15:50.057 "data_offset": 2048, 00:15:50.057 "data_size": 63488 00:15:50.057 } 00:15:50.057 ] 00:15:50.057 }' 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.057 21:23:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.317 [2024-11-26 21:23:08.355792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.317 
21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.317 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.317 "name": "Existed_Raid", 00:15:50.317 "uuid": "5c288ca7-8909-4d7b-ba41-23e7b1bf0c07", 00:15:50.317 "strip_size_kb": 64, 00:15:50.317 "state": "configuring", 00:15:50.317 "raid_level": "raid5f", 00:15:50.317 "superblock": true, 00:15:50.317 "num_base_bdevs": 4, 00:15:50.317 "num_base_bdevs_discovered": 2, 00:15:50.318 "num_base_bdevs_operational": 4, 00:15:50.318 "base_bdevs_list": [ 00:15:50.318 { 00:15:50.318 "name": "BaseBdev1", 00:15:50.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.318 "is_configured": false, 00:15:50.318 "data_offset": 0, 00:15:50.318 "data_size": 0 00:15:50.318 }, 00:15:50.318 { 00:15:50.318 "name": null, 00:15:50.318 "uuid": "a8c89794-24a9-48ac-8691-4200bb50af80", 00:15:50.318 "is_configured": false, 00:15:50.318 "data_offset": 0, 00:15:50.318 "data_size": 63488 00:15:50.318 }, 00:15:50.318 { 
00:15:50.318 "name": "BaseBdev3", 00:15:50.318 "uuid": "e0d0e454-4c22-4d30-af31-ffdfb573650d", 00:15:50.318 "is_configured": true, 00:15:50.318 "data_offset": 2048, 00:15:50.318 "data_size": 63488 00:15:50.318 }, 00:15:50.318 { 00:15:50.318 "name": "BaseBdev4", 00:15:50.318 "uuid": "024b70d8-9199-44b7-a042-3179b463b9e5", 00:15:50.318 "is_configured": true, 00:15:50.318 "data_offset": 2048, 00:15:50.318 "data_size": 63488 00:15:50.318 } 00:15:50.318 ] 00:15:50.318 }' 00:15:50.318 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.318 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.888 [2024-11-26 21:23:08.852429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.888 BaseBdev1 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.888 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.889 [ 00:15:50.889 { 00:15:50.889 "name": "BaseBdev1", 00:15:50.889 "aliases": [ 00:15:50.889 "2975778b-ea62-47c3-8242-3ae4f208b456" 00:15:50.889 ], 00:15:50.889 "product_name": "Malloc disk", 00:15:50.889 "block_size": 512, 00:15:50.889 "num_blocks": 65536, 00:15:50.889 "uuid": "2975778b-ea62-47c3-8242-3ae4f208b456", 00:15:50.889 "assigned_rate_limits": { 00:15:50.889 "rw_ios_per_sec": 0, 00:15:50.889 "rw_mbytes_per_sec": 0, 00:15:50.889 
"r_mbytes_per_sec": 0, 00:15:50.889 "w_mbytes_per_sec": 0 00:15:50.889 }, 00:15:50.889 "claimed": true, 00:15:50.889 "claim_type": "exclusive_write", 00:15:50.889 "zoned": false, 00:15:50.889 "supported_io_types": { 00:15:50.889 "read": true, 00:15:50.889 "write": true, 00:15:50.889 "unmap": true, 00:15:50.889 "flush": true, 00:15:50.889 "reset": true, 00:15:50.889 "nvme_admin": false, 00:15:50.889 "nvme_io": false, 00:15:50.889 "nvme_io_md": false, 00:15:50.889 "write_zeroes": true, 00:15:50.889 "zcopy": true, 00:15:50.889 "get_zone_info": false, 00:15:50.889 "zone_management": false, 00:15:50.889 "zone_append": false, 00:15:50.889 "compare": false, 00:15:50.889 "compare_and_write": false, 00:15:50.889 "abort": true, 00:15:50.889 "seek_hole": false, 00:15:50.889 "seek_data": false, 00:15:50.889 "copy": true, 00:15:50.889 "nvme_iov_md": false 00:15:50.889 }, 00:15:50.889 "memory_domains": [ 00:15:50.889 { 00:15:50.889 "dma_device_id": "system", 00:15:50.889 "dma_device_type": 1 00:15:50.889 }, 00:15:50.889 { 00:15:50.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.889 "dma_device_type": 2 00:15:50.889 } 00:15:50.889 ], 00:15:50.889 "driver_specific": {} 00:15:50.889 } 00:15:50.889 ] 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.889 21:23:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.889 "name": "Existed_Raid", 00:15:50.889 "uuid": "5c288ca7-8909-4d7b-ba41-23e7b1bf0c07", 00:15:50.889 "strip_size_kb": 64, 00:15:50.889 "state": "configuring", 00:15:50.889 "raid_level": "raid5f", 00:15:50.889 "superblock": true, 00:15:50.889 "num_base_bdevs": 4, 00:15:50.889 "num_base_bdevs_discovered": 3, 00:15:50.889 "num_base_bdevs_operational": 4, 00:15:50.889 "base_bdevs_list": [ 00:15:50.889 { 00:15:50.889 "name": "BaseBdev1", 00:15:50.889 "uuid": "2975778b-ea62-47c3-8242-3ae4f208b456", 00:15:50.889 "is_configured": true, 00:15:50.889 "data_offset": 2048, 00:15:50.889 "data_size": 63488 00:15:50.889 
}, 00:15:50.889 { 00:15:50.889 "name": null, 00:15:50.889 "uuid": "a8c89794-24a9-48ac-8691-4200bb50af80", 00:15:50.889 "is_configured": false, 00:15:50.889 "data_offset": 0, 00:15:50.889 "data_size": 63488 00:15:50.889 }, 00:15:50.889 { 00:15:50.889 "name": "BaseBdev3", 00:15:50.889 "uuid": "e0d0e454-4c22-4d30-af31-ffdfb573650d", 00:15:50.889 "is_configured": true, 00:15:50.889 "data_offset": 2048, 00:15:50.889 "data_size": 63488 00:15:50.889 }, 00:15:50.889 { 00:15:50.889 "name": "BaseBdev4", 00:15:50.889 "uuid": "024b70d8-9199-44b7-a042-3179b463b9e5", 00:15:50.889 "is_configured": true, 00:15:50.889 "data_offset": 2048, 00:15:50.889 "data_size": 63488 00:15:50.889 } 00:15:50.889 ] 00:15:50.889 }' 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.889 21:23:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.459 
[2024-11-26 21:23:09.427556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.459 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.460 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.460 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.460 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.460 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.460 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:51.460 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.460 "name": "Existed_Raid", 00:15:51.460 "uuid": "5c288ca7-8909-4d7b-ba41-23e7b1bf0c07", 00:15:51.460 "strip_size_kb": 64, 00:15:51.460 "state": "configuring", 00:15:51.460 "raid_level": "raid5f", 00:15:51.460 "superblock": true, 00:15:51.460 "num_base_bdevs": 4, 00:15:51.460 "num_base_bdevs_discovered": 2, 00:15:51.460 "num_base_bdevs_operational": 4, 00:15:51.460 "base_bdevs_list": [ 00:15:51.460 { 00:15:51.460 "name": "BaseBdev1", 00:15:51.460 "uuid": "2975778b-ea62-47c3-8242-3ae4f208b456", 00:15:51.460 "is_configured": true, 00:15:51.460 "data_offset": 2048, 00:15:51.460 "data_size": 63488 00:15:51.460 }, 00:15:51.460 { 00:15:51.460 "name": null, 00:15:51.460 "uuid": "a8c89794-24a9-48ac-8691-4200bb50af80", 00:15:51.460 "is_configured": false, 00:15:51.460 "data_offset": 0, 00:15:51.460 "data_size": 63488 00:15:51.460 }, 00:15:51.460 { 00:15:51.460 "name": null, 00:15:51.460 "uuid": "e0d0e454-4c22-4d30-af31-ffdfb573650d", 00:15:51.460 "is_configured": false, 00:15:51.460 "data_offset": 0, 00:15:51.460 "data_size": 63488 00:15:51.460 }, 00:15:51.460 { 00:15:51.460 "name": "BaseBdev4", 00:15:51.460 "uuid": "024b70d8-9199-44b7-a042-3179b463b9e5", 00:15:51.460 "is_configured": true, 00:15:51.460 "data_offset": 2048, 00:15:51.460 "data_size": 63488 00:15:51.460 } 00:15:51.460 ] 00:15:51.460 }' 00:15:51.460 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.460 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.720 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.720 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:51.720 21:23:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.720 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.980 [2024-11-26 21:23:09.926678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.980 21:23:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.980 "name": "Existed_Raid", 00:15:51.980 "uuid": "5c288ca7-8909-4d7b-ba41-23e7b1bf0c07", 00:15:51.980 "strip_size_kb": 64, 00:15:51.980 "state": "configuring", 00:15:51.980 "raid_level": "raid5f", 00:15:51.980 "superblock": true, 00:15:51.980 "num_base_bdevs": 4, 00:15:51.980 "num_base_bdevs_discovered": 3, 00:15:51.980 "num_base_bdevs_operational": 4, 00:15:51.980 "base_bdevs_list": [ 00:15:51.980 { 00:15:51.980 "name": "BaseBdev1", 00:15:51.980 "uuid": "2975778b-ea62-47c3-8242-3ae4f208b456", 00:15:51.980 "is_configured": true, 00:15:51.980 "data_offset": 2048, 00:15:51.980 "data_size": 63488 00:15:51.980 }, 00:15:51.980 { 00:15:51.980 "name": null, 00:15:51.980 "uuid": "a8c89794-24a9-48ac-8691-4200bb50af80", 00:15:51.980 "is_configured": false, 00:15:51.980 "data_offset": 0, 00:15:51.980 "data_size": 63488 00:15:51.980 }, 00:15:51.980 { 00:15:51.980 "name": "BaseBdev3", 00:15:51.980 "uuid": "e0d0e454-4c22-4d30-af31-ffdfb573650d", 00:15:51.980 "is_configured": true, 00:15:51.980 "data_offset": 2048, 00:15:51.980 "data_size": 63488 00:15:51.980 }, 00:15:51.980 { 
00:15:51.980 "name": "BaseBdev4", 00:15:51.980 "uuid": "024b70d8-9199-44b7-a042-3179b463b9e5", 00:15:51.980 "is_configured": true, 00:15:51.980 "data_offset": 2048, 00:15:51.980 "data_size": 63488 00:15:51.980 } 00:15:51.980 ] 00:15:51.980 }' 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.980 21:23:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.239 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:52.239 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.239 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.239 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.239 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.239 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:52.239 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:52.239 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.239 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.239 [2024-11-26 21:23:10.389920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.499 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.499 "name": "Existed_Raid", 00:15:52.499 "uuid": "5c288ca7-8909-4d7b-ba41-23e7b1bf0c07", 00:15:52.499 "strip_size_kb": 64, 00:15:52.499 "state": "configuring", 00:15:52.499 "raid_level": "raid5f", 00:15:52.499 "superblock": true, 00:15:52.499 "num_base_bdevs": 4, 00:15:52.499 "num_base_bdevs_discovered": 2, 00:15:52.499 
"num_base_bdevs_operational": 4, 00:15:52.499 "base_bdevs_list": [ 00:15:52.499 { 00:15:52.499 "name": null, 00:15:52.499 "uuid": "2975778b-ea62-47c3-8242-3ae4f208b456", 00:15:52.499 "is_configured": false, 00:15:52.499 "data_offset": 0, 00:15:52.499 "data_size": 63488 00:15:52.499 }, 00:15:52.499 { 00:15:52.500 "name": null, 00:15:52.500 "uuid": "a8c89794-24a9-48ac-8691-4200bb50af80", 00:15:52.500 "is_configured": false, 00:15:52.500 "data_offset": 0, 00:15:52.500 "data_size": 63488 00:15:52.500 }, 00:15:52.500 { 00:15:52.500 "name": "BaseBdev3", 00:15:52.500 "uuid": "e0d0e454-4c22-4d30-af31-ffdfb573650d", 00:15:52.500 "is_configured": true, 00:15:52.500 "data_offset": 2048, 00:15:52.500 "data_size": 63488 00:15:52.500 }, 00:15:52.500 { 00:15:52.500 "name": "BaseBdev4", 00:15:52.500 "uuid": "024b70d8-9199-44b7-a042-3179b463b9e5", 00:15:52.500 "is_configured": true, 00:15:52.500 "data_offset": 2048, 00:15:52.500 "data_size": 63488 00:15:52.500 } 00:15:52.500 ] 00:15:52.500 }' 00:15:52.500 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.500 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.070 [2024-11-26 21:23:10.983353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.070 21:23:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.070 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.070 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.070 "name": "Existed_Raid", 00:15:53.070 "uuid": "5c288ca7-8909-4d7b-ba41-23e7b1bf0c07", 00:15:53.070 "strip_size_kb": 64, 00:15:53.070 "state": "configuring", 00:15:53.070 "raid_level": "raid5f", 00:15:53.070 "superblock": true, 00:15:53.070 "num_base_bdevs": 4, 00:15:53.070 "num_base_bdevs_discovered": 3, 00:15:53.070 "num_base_bdevs_operational": 4, 00:15:53.070 "base_bdevs_list": [ 00:15:53.070 { 00:15:53.070 "name": null, 00:15:53.070 "uuid": "2975778b-ea62-47c3-8242-3ae4f208b456", 00:15:53.070 "is_configured": false, 00:15:53.070 "data_offset": 0, 00:15:53.070 "data_size": 63488 00:15:53.070 }, 00:15:53.070 { 00:15:53.070 "name": "BaseBdev2", 00:15:53.070 "uuid": "a8c89794-24a9-48ac-8691-4200bb50af80", 00:15:53.070 "is_configured": true, 00:15:53.070 "data_offset": 2048, 00:15:53.070 "data_size": 63488 00:15:53.070 }, 00:15:53.070 { 00:15:53.070 "name": "BaseBdev3", 00:15:53.070 "uuid": "e0d0e454-4c22-4d30-af31-ffdfb573650d", 00:15:53.070 "is_configured": true, 00:15:53.070 "data_offset": 2048, 00:15:53.070 "data_size": 63488 00:15:53.070 }, 00:15:53.070 { 00:15:53.070 "name": "BaseBdev4", 00:15:53.070 "uuid": "024b70d8-9199-44b7-a042-3179b463b9e5", 00:15:53.070 "is_configured": true, 00:15:53.070 "data_offset": 2048, 00:15:53.070 "data_size": 63488 00:15:53.070 } 00:15:53.070 ] 00:15:53.070 }' 00:15:53.070 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.070 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:53.330 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.330 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:53.330 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.331 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.331 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.331 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:53.331 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.331 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.331 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.331 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:53.331 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2975778b-ea62-47c3-8242-3ae4f208b456 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.590 [2024-11-26 21:23:11.531087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:53.590 [2024-11-26 21:23:11.531369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:53.590 [2024-11-26 
21:23:11.531408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:53.590 [2024-11-26 21:23:11.531737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:53.590 NewBaseBdev 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.590 [2024-11-26 21:23:11.538724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:53.590 [2024-11-26 21:23:11.538789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:53.590 [2024-11-26 21:23:11.538980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.590 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.590 [ 00:15:53.590 { 00:15:53.590 "name": "NewBaseBdev", 00:15:53.590 "aliases": [ 00:15:53.590 "2975778b-ea62-47c3-8242-3ae4f208b456" 00:15:53.590 ], 00:15:53.590 "product_name": "Malloc disk", 00:15:53.590 "block_size": 512, 00:15:53.591 "num_blocks": 65536, 00:15:53.591 "uuid": "2975778b-ea62-47c3-8242-3ae4f208b456", 00:15:53.591 "assigned_rate_limits": { 00:15:53.591 "rw_ios_per_sec": 0, 00:15:53.591 "rw_mbytes_per_sec": 0, 00:15:53.591 "r_mbytes_per_sec": 0, 00:15:53.591 "w_mbytes_per_sec": 0 00:15:53.591 }, 00:15:53.591 "claimed": true, 00:15:53.591 "claim_type": "exclusive_write", 00:15:53.591 "zoned": false, 00:15:53.591 "supported_io_types": { 00:15:53.591 "read": true, 00:15:53.591 "write": true, 00:15:53.591 "unmap": true, 00:15:53.591 "flush": true, 00:15:53.591 "reset": true, 00:15:53.591 "nvme_admin": false, 00:15:53.591 "nvme_io": false, 00:15:53.591 "nvme_io_md": false, 00:15:53.591 "write_zeroes": true, 00:15:53.591 "zcopy": true, 00:15:53.591 "get_zone_info": false, 00:15:53.591 "zone_management": false, 00:15:53.591 "zone_append": false, 00:15:53.591 "compare": false, 00:15:53.591 "compare_and_write": false, 00:15:53.591 "abort": true, 00:15:53.591 "seek_hole": false, 00:15:53.591 "seek_data": false, 00:15:53.591 "copy": true, 00:15:53.591 "nvme_iov_md": false 00:15:53.591 }, 00:15:53.591 "memory_domains": [ 00:15:53.591 { 00:15:53.591 "dma_device_id": "system", 00:15:53.591 "dma_device_type": 1 00:15:53.591 }, 00:15:53.591 { 00:15:53.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.591 "dma_device_type": 2 00:15:53.591 } 00:15:53.591 ], 00:15:53.591 "driver_specific": {} 00:15:53.591 } 00:15:53.591 ] 00:15:53.591 21:23:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.591 "name": "Existed_Raid", 00:15:53.591 "uuid": "5c288ca7-8909-4d7b-ba41-23e7b1bf0c07", 00:15:53.591 "strip_size_kb": 64, 00:15:53.591 "state": "online", 00:15:53.591 "raid_level": "raid5f", 00:15:53.591 "superblock": true, 00:15:53.591 "num_base_bdevs": 4, 00:15:53.591 "num_base_bdevs_discovered": 4, 00:15:53.591 "num_base_bdevs_operational": 4, 00:15:53.591 "base_bdevs_list": [ 00:15:53.591 { 00:15:53.591 "name": "NewBaseBdev", 00:15:53.591 "uuid": "2975778b-ea62-47c3-8242-3ae4f208b456", 00:15:53.591 "is_configured": true, 00:15:53.591 "data_offset": 2048, 00:15:53.591 "data_size": 63488 00:15:53.591 }, 00:15:53.591 { 00:15:53.591 "name": "BaseBdev2", 00:15:53.591 "uuid": "a8c89794-24a9-48ac-8691-4200bb50af80", 00:15:53.591 "is_configured": true, 00:15:53.591 "data_offset": 2048, 00:15:53.591 "data_size": 63488 00:15:53.591 }, 00:15:53.591 { 00:15:53.591 "name": "BaseBdev3", 00:15:53.591 "uuid": "e0d0e454-4c22-4d30-af31-ffdfb573650d", 00:15:53.591 "is_configured": true, 00:15:53.591 "data_offset": 2048, 00:15:53.591 "data_size": 63488 00:15:53.591 }, 00:15:53.591 { 00:15:53.591 "name": "BaseBdev4", 00:15:53.591 "uuid": "024b70d8-9199-44b7-a042-3179b463b9e5", 00:15:53.591 "is_configured": true, 00:15:53.591 "data_offset": 2048, 00:15:53.591 "data_size": 63488 00:15:53.591 } 00:15:53.591 ] 00:15:53.591 }' 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.591 21:23:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.850 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:53.850 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:53.850 21:23:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:53.850 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:53.850 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:53.850 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:54.111 [2024-11-26 21:23:12.015088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:54.111 "name": "Existed_Raid", 00:15:54.111 "aliases": [ 00:15:54.111 "5c288ca7-8909-4d7b-ba41-23e7b1bf0c07" 00:15:54.111 ], 00:15:54.111 "product_name": "Raid Volume", 00:15:54.111 "block_size": 512, 00:15:54.111 "num_blocks": 190464, 00:15:54.111 "uuid": "5c288ca7-8909-4d7b-ba41-23e7b1bf0c07", 00:15:54.111 "assigned_rate_limits": { 00:15:54.111 "rw_ios_per_sec": 0, 00:15:54.111 "rw_mbytes_per_sec": 0, 00:15:54.111 "r_mbytes_per_sec": 0, 00:15:54.111 "w_mbytes_per_sec": 0 00:15:54.111 }, 00:15:54.111 "claimed": false, 00:15:54.111 "zoned": false, 00:15:54.111 "supported_io_types": { 00:15:54.111 "read": true, 00:15:54.111 "write": true, 00:15:54.111 "unmap": false, 00:15:54.111 "flush": false, 00:15:54.111 "reset": true, 00:15:54.111 "nvme_admin": false, 00:15:54.111 "nvme_io": false, 
00:15:54.111 "nvme_io_md": false, 00:15:54.111 "write_zeroes": true, 00:15:54.111 "zcopy": false, 00:15:54.111 "get_zone_info": false, 00:15:54.111 "zone_management": false, 00:15:54.111 "zone_append": false, 00:15:54.111 "compare": false, 00:15:54.111 "compare_and_write": false, 00:15:54.111 "abort": false, 00:15:54.111 "seek_hole": false, 00:15:54.111 "seek_data": false, 00:15:54.111 "copy": false, 00:15:54.111 "nvme_iov_md": false 00:15:54.111 }, 00:15:54.111 "driver_specific": { 00:15:54.111 "raid": { 00:15:54.111 "uuid": "5c288ca7-8909-4d7b-ba41-23e7b1bf0c07", 00:15:54.111 "strip_size_kb": 64, 00:15:54.111 "state": "online", 00:15:54.111 "raid_level": "raid5f", 00:15:54.111 "superblock": true, 00:15:54.111 "num_base_bdevs": 4, 00:15:54.111 "num_base_bdevs_discovered": 4, 00:15:54.111 "num_base_bdevs_operational": 4, 00:15:54.111 "base_bdevs_list": [ 00:15:54.111 { 00:15:54.111 "name": "NewBaseBdev", 00:15:54.111 "uuid": "2975778b-ea62-47c3-8242-3ae4f208b456", 00:15:54.111 "is_configured": true, 00:15:54.111 "data_offset": 2048, 00:15:54.111 "data_size": 63488 00:15:54.111 }, 00:15:54.111 { 00:15:54.111 "name": "BaseBdev2", 00:15:54.111 "uuid": "a8c89794-24a9-48ac-8691-4200bb50af80", 00:15:54.111 "is_configured": true, 00:15:54.111 "data_offset": 2048, 00:15:54.111 "data_size": 63488 00:15:54.111 }, 00:15:54.111 { 00:15:54.111 "name": "BaseBdev3", 00:15:54.111 "uuid": "e0d0e454-4c22-4d30-af31-ffdfb573650d", 00:15:54.111 "is_configured": true, 00:15:54.111 "data_offset": 2048, 00:15:54.111 "data_size": 63488 00:15:54.111 }, 00:15:54.111 { 00:15:54.111 "name": "BaseBdev4", 00:15:54.111 "uuid": "024b70d8-9199-44b7-a042-3179b463b9e5", 00:15:54.111 "is_configured": true, 00:15:54.111 "data_offset": 2048, 00:15:54.111 "data_size": 63488 00:15:54.111 } 00:15:54.111 ] 00:15:54.111 } 00:15:54.111 } 00:15:54.111 }' 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:54.111 BaseBdev2 00:15:54.111 BaseBdev3 00:15:54.111 BaseBdev4' 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.111 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.112 21:23:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.112 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.375 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.375 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.375 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.375 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:54.375 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.375 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.375 21:23:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.375 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.375 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.375 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.375 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:54.375 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.376 [2024-11-26 21:23:12.346284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.376 [2024-11-26 21:23:12.346355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.376 [2024-11-26 21:23:12.346431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.376 [2024-11-26 21:23:12.346748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.376 [2024-11-26 21:23:12.346760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83230 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83230 ']' 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83230 00:15:54.376 21:23:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83230 00:15:54.376 killing process with pid 83230 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83230' 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83230 00:15:54.376 [2024-11-26 21:23:12.393197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.376 21:23:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83230 00:15:54.966 [2024-11-26 21:23:12.807894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.906 ************************************ 00:15:55.906 END TEST raid5f_state_function_test_sb 00:15:55.906 ************************************ 00:15:55.906 21:23:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:55.906 00:15:55.906 real 0m11.640s 00:15:55.906 user 0m18.150s 00:15:55.906 sys 0m2.296s 00:15:55.906 21:23:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.906 21:23:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.906 21:23:14 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:55.906 21:23:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:55.906 
21:23:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.906 21:23:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.906 ************************************ 00:15:55.906 START TEST raid5f_superblock_test 00:15:55.906 ************************************ 00:15:55.906 21:23:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:55.906 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:55.906 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:55.906 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:55.906 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:55.906 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:55.906 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:55.906 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83906 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83906 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83906 ']' 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.166 21:23:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.166 [2024-11-26 21:23:14.147199] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:15:56.166 [2024-11-26 21:23:14.147448] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83906 ] 00:15:56.426 [2024-11-26 21:23:14.326461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.426 [2024-11-26 21:23:14.449002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.685 [2024-11-26 21:23:14.680901] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.685 [2024-11-26 21:23:14.681090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.945 21:23:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.945 malloc1 00:15:56.945 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.945 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:56.945 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.945 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.945 [2024-11-26 21:23:15.023310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:56.945 [2024-11-26 21:23:15.023454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.945 [2024-11-26 21:23:15.023484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:56.945 [2024-11-26 21:23:15.023494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.945 [2024-11-26 21:23:15.025943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.945 [2024-11-26 21:23:15.025991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:56.945 pt1 00:15:56.945 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.945 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.945 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.946 malloc2 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.946 [2024-11-26 21:23:15.084305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.946 [2024-11-26 21:23:15.084419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.946 [2024-11-26 21:23:15.084467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:56.946 [2024-11-26 21:23:15.084496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.946 [2024-11-26 21:23:15.086855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.946 [2024-11-26 21:23:15.086923] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.946 pt2 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.946 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.206 malloc3 00:15:57.206 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.206 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:57.206 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.206 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.206 [2024-11-26 21:23:15.161390] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:57.206 [2024-11-26 21:23:15.161516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.206 [2024-11-26 21:23:15.161571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:57.206 [2024-11-26 21:23:15.161600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.206 [2024-11-26 21:23:15.163872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.206 [2024-11-26 21:23:15.163949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:57.206 pt3 00:15:57.206 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.206 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:57.206 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:57.206 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:57.206 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:57.206 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.207 21:23:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.207 malloc4 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.207 [2024-11-26 21:23:15.224232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:57.207 [2024-11-26 21:23:15.224291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.207 [2024-11-26 21:23:15.224328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:57.207 [2024-11-26 21:23:15.224336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.207 [2024-11-26 21:23:15.226606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.207 [2024-11-26 21:23:15.226641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:57.207 pt4 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.207 [2024-11-26 21:23:15.236255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.207 [2024-11-26 21:23:15.238232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.207 [2024-11-26 21:23:15.238311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:57.207 [2024-11-26 21:23:15.238355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:57.207 [2024-11-26 21:23:15.238532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:57.207 [2024-11-26 21:23:15.238547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:57.207 [2024-11-26 21:23:15.238779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:57.207 [2024-11-26 21:23:15.245665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:57.207 [2024-11-26 21:23:15.245688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:57.207 [2024-11-26 21:23:15.245855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.207 
21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.207 "name": "raid_bdev1", 00:15:57.207 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:15:57.207 "strip_size_kb": 64, 00:15:57.207 "state": "online", 00:15:57.207 "raid_level": "raid5f", 00:15:57.207 "superblock": true, 00:15:57.207 "num_base_bdevs": 4, 00:15:57.207 "num_base_bdevs_discovered": 4, 00:15:57.207 "num_base_bdevs_operational": 4, 00:15:57.207 "base_bdevs_list": [ 00:15:57.207 { 00:15:57.207 "name": "pt1", 00:15:57.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.207 "is_configured": true, 00:15:57.207 "data_offset": 2048, 00:15:57.207 "data_size": 63488 00:15:57.207 }, 00:15:57.207 { 00:15:57.207 "name": "pt2", 00:15:57.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.207 "is_configured": true, 00:15:57.207 "data_offset": 2048, 00:15:57.207 
"data_size": 63488 00:15:57.207 }, 00:15:57.207 { 00:15:57.207 "name": "pt3", 00:15:57.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.207 "is_configured": true, 00:15:57.207 "data_offset": 2048, 00:15:57.207 "data_size": 63488 00:15:57.207 }, 00:15:57.207 { 00:15:57.207 "name": "pt4", 00:15:57.207 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:57.207 "is_configured": true, 00:15:57.207 "data_offset": 2048, 00:15:57.207 "data_size": 63488 00:15:57.207 } 00:15:57.207 ] 00:15:57.207 }' 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.207 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:57.777 [2024-11-26 21:23:15.677862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:57.777 "name": "raid_bdev1", 00:15:57.777 "aliases": [ 00:15:57.777 "0550fe2e-79a9-4c15-bd13-07e6ca718c5e" 00:15:57.777 ], 00:15:57.777 "product_name": "Raid Volume", 00:15:57.777 "block_size": 512, 00:15:57.777 "num_blocks": 190464, 00:15:57.777 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:15:57.777 "assigned_rate_limits": { 00:15:57.777 "rw_ios_per_sec": 0, 00:15:57.777 "rw_mbytes_per_sec": 0, 00:15:57.777 "r_mbytes_per_sec": 0, 00:15:57.777 "w_mbytes_per_sec": 0 00:15:57.777 }, 00:15:57.777 "claimed": false, 00:15:57.777 "zoned": false, 00:15:57.777 "supported_io_types": { 00:15:57.777 "read": true, 00:15:57.777 "write": true, 00:15:57.777 "unmap": false, 00:15:57.777 "flush": false, 00:15:57.777 "reset": true, 00:15:57.777 "nvme_admin": false, 00:15:57.777 "nvme_io": false, 00:15:57.777 "nvme_io_md": false, 00:15:57.777 "write_zeroes": true, 00:15:57.777 "zcopy": false, 00:15:57.777 "get_zone_info": false, 00:15:57.777 "zone_management": false, 00:15:57.777 "zone_append": false, 00:15:57.777 "compare": false, 00:15:57.777 "compare_and_write": false, 00:15:57.777 "abort": false, 00:15:57.777 "seek_hole": false, 00:15:57.777 "seek_data": false, 00:15:57.777 "copy": false, 00:15:57.777 "nvme_iov_md": false 00:15:57.777 }, 00:15:57.777 "driver_specific": { 00:15:57.777 "raid": { 00:15:57.777 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:15:57.777 "strip_size_kb": 64, 00:15:57.777 "state": "online", 00:15:57.777 "raid_level": "raid5f", 00:15:57.777 "superblock": true, 00:15:57.777 "num_base_bdevs": 4, 00:15:57.777 "num_base_bdevs_discovered": 4, 00:15:57.777 "num_base_bdevs_operational": 4, 00:15:57.777 "base_bdevs_list": [ 00:15:57.777 { 00:15:57.777 "name": "pt1", 00:15:57.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.777 "is_configured": true, 00:15:57.777 "data_offset": 2048, 
00:15:57.777 "data_size": 63488 00:15:57.777 }, 00:15:57.777 { 00:15:57.777 "name": "pt2", 00:15:57.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.777 "is_configured": true, 00:15:57.777 "data_offset": 2048, 00:15:57.777 "data_size": 63488 00:15:57.777 }, 00:15:57.777 { 00:15:57.777 "name": "pt3", 00:15:57.777 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.777 "is_configured": true, 00:15:57.777 "data_offset": 2048, 00:15:57.777 "data_size": 63488 00:15:57.777 }, 00:15:57.777 { 00:15:57.777 "name": "pt4", 00:15:57.777 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:57.777 "is_configured": true, 00:15:57.777 "data_offset": 2048, 00:15:57.777 "data_size": 63488 00:15:57.777 } 00:15:57.777 ] 00:15:57.777 } 00:15:57.777 } 00:15:57.777 }' 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:57.777 pt2 00:15:57.777 pt3 00:15:57.777 pt4' 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.777 21:23:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.777 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.037 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:58.037 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.037 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.037 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.037 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.037 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.037 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.037 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.037 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.037 21:23:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.037 21:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:58.037 [2024-11-26 21:23:15.993265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.037 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.037 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0550fe2e-79a9-4c15-bd13-07e6ca718c5e 00:15:58.037 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
0550fe2e-79a9-4c15-bd13-07e6ca718c5e ']' 00:15:58.037 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:58.037 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.038 [2024-11-26 21:23:16.045057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.038 [2024-11-26 21:23:16.045119] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.038 [2024-11-26 21:23:16.045209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.038 [2024-11-26 21:23:16.045301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.038 [2024-11-26 21:23:16.045363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.038 
21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.038 21:23:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.038 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.297 [2024-11-26 21:23:16.204861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:58.297 [2024-11-26 21:23:16.206871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:58.297 [2024-11-26 21:23:16.206954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:58.297 [2024-11-26 21:23:16.207015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:58.297 [2024-11-26 21:23:16.207095] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:58.297 [2024-11-26 21:23:16.207191] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:58.297 [2024-11-26 21:23:16.207250] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:58.297 [2024-11-26 21:23:16.207307] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:58.297 [2024-11-26 21:23:16.207367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.297 [2024-11-26 21:23:16.207397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:58.297 request: 00:15:58.297 { 00:15:58.297 "name": "raid_bdev1", 00:15:58.297 "raid_level": "raid5f", 00:15:58.297 "base_bdevs": [ 00:15:58.297 "malloc1", 00:15:58.297 "malloc2", 00:15:58.297 "malloc3", 00:15:58.297 "malloc4" 00:15:58.297 ], 00:15:58.297 "strip_size_kb": 64, 00:15:58.297 "superblock": false, 00:15:58.297 "method": "bdev_raid_create", 00:15:58.297 "req_id": 1 00:15:58.297 } 00:15:58.297 Got JSON-RPC error response 
00:15:58.297 response: 00:15:58.297 { 00:15:58.297 "code": -17, 00:15:58.297 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:58.297 } 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:58.297 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.298 [2024-11-26 21:23:16.272640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:58.298 [2024-11-26 21:23:16.272731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:58.298 [2024-11-26 21:23:16.272752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:58.298 [2024-11-26 21:23:16.272763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.298 [2024-11-26 21:23:16.275186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.298 [2024-11-26 21:23:16.275223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:58.298 [2024-11-26 21:23:16.275306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:58.298 [2024-11-26 21:23:16.275359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.298 pt1 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.298 "name": "raid_bdev1", 00:15:58.298 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:15:58.298 "strip_size_kb": 64, 00:15:58.298 "state": "configuring", 00:15:58.298 "raid_level": "raid5f", 00:15:58.298 "superblock": true, 00:15:58.298 "num_base_bdevs": 4, 00:15:58.298 "num_base_bdevs_discovered": 1, 00:15:58.298 "num_base_bdevs_operational": 4, 00:15:58.298 "base_bdevs_list": [ 00:15:58.298 { 00:15:58.298 "name": "pt1", 00:15:58.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.298 "is_configured": true, 00:15:58.298 "data_offset": 2048, 00:15:58.298 "data_size": 63488 00:15:58.298 }, 00:15:58.298 { 00:15:58.298 "name": null, 00:15:58.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.298 "is_configured": false, 00:15:58.298 "data_offset": 2048, 00:15:58.298 "data_size": 63488 00:15:58.298 }, 00:15:58.298 { 00:15:58.298 "name": null, 00:15:58.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.298 "is_configured": false, 00:15:58.298 "data_offset": 2048, 00:15:58.298 "data_size": 63488 00:15:58.298 }, 00:15:58.298 { 00:15:58.298 "name": null, 00:15:58.298 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:58.298 "is_configured": false, 00:15:58.298 "data_offset": 2048, 00:15:58.298 "data_size": 63488 00:15:58.298 } 00:15:58.298 ] 00:15:58.298 }' 
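The `verify_raid_bdev_state` helper traced above fetches every raid bdev via `bdev_raid_get_bdevs all` and filters with `jq -r '.[] | select(.name == "raid_bdev1")'`, then compares state, level, strip size, and base-bdev counts. The same check can be sketched in Python against the exact JSON captured in this log (the literal below is copied from the dump above; the helper name and argument order follow `bdev_raid.sh`):

```python
import json

# raid_bdev1 info as captured by the test after pt1 was claimed
# (values copied verbatim from the log dump above).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": false, "data_offset": 2048, "data_size": 63488},
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000003",
     "is_configured": false, "data_offset": 2048, "data_size": 63488},
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000004",
     "is_configured": false, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the shell helper: compare the reported state, raid level,
    # strip size, and the configured/operational base bdev counts.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and discovered == info["num_base_bdevs_discovered"]
            and info["num_base_bdevs_operational"] == operational)

# The call traced above: verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
ok = verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4)
```

With only pt1 claimed, one base bdev is configured, matching `num_base_bdevs_discovered: 1` while the array stays in `configuring` until all four are present.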
00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.298 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.557 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:58.557 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.557 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.557 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.557 [2024-11-26 21:23:16.707946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.557 [2024-11-26 21:23:16.708064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.557 [2024-11-26 21:23:16.708100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:58.557 [2024-11-26 21:23:16.708137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.557 [2024-11-26 21:23:16.708556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.557 [2024-11-26 21:23:16.708622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.557 [2024-11-26 21:23:16.708716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:58.557 [2024-11-26 21:23:16.708767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.816 pt2 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.816 [2024-11-26 21:23:16.719936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:58.816 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.816 "name": "raid_bdev1", 00:15:58.816 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:15:58.816 "strip_size_kb": 64, 00:15:58.816 "state": "configuring", 00:15:58.816 "raid_level": "raid5f", 00:15:58.816 "superblock": true, 00:15:58.816 "num_base_bdevs": 4, 00:15:58.816 "num_base_bdevs_discovered": 1, 00:15:58.816 "num_base_bdevs_operational": 4, 00:15:58.816 "base_bdevs_list": [ 00:15:58.816 { 00:15:58.816 "name": "pt1", 00:15:58.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.816 "is_configured": true, 00:15:58.816 "data_offset": 2048, 00:15:58.816 "data_size": 63488 00:15:58.816 }, 00:15:58.816 { 00:15:58.816 "name": null, 00:15:58.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.816 "is_configured": false, 00:15:58.816 "data_offset": 0, 00:15:58.816 "data_size": 63488 00:15:58.816 }, 00:15:58.816 { 00:15:58.816 "name": null, 00:15:58.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.816 "is_configured": false, 00:15:58.816 "data_offset": 2048, 00:15:58.816 "data_size": 63488 00:15:58.816 }, 00:15:58.816 { 00:15:58.817 "name": null, 00:15:58.817 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:58.817 "is_configured": false, 00:15:58.817 "data_offset": 2048, 00:15:58.817 "data_size": 63488 00:15:58.817 } 00:15:58.817 ] 00:15:58.817 }' 00:15:58.817 21:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.817 21:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
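The loop entered at `bdev_raid.sh@478` wraps each remaining malloc bdev in a passthru bdev with a fixed UUID (`bdev_passthru_create -b mallocN -p ptN -u 00000000-...-00000000000N`). A sketch of the JSON-RPC requests that loop generates is below; the JSON-RPC 2.0 framing and the parameter names (`base_bdev_name`, `name`, `uuid`, corresponding to the `-b`/`-p`/`-u` flags in the log) are assumptions about the wire format, and the `id` values are illustrative:

```python
# Build the bdev_passthru_create requests issued by the i=1..3 loop above.
# pt1/malloc1 was created before the loop, so the loop covers indexes 1..3,
# i.e. malloc2..malloc4 -> pt2..pt4 with deterministic UUIDs.
num_base_bdevs = 4

def passthru_create_request(i, req_id):
    n = i + 1
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "bdev_passthru_create",
        "params": {
            "base_bdev_name": f"malloc{n}",   # -b mallocN
            "name": f"pt{n}",                 # -p ptN
            "uuid": f"00000000-0000-0000-0000-{n:012d}",  # -u fixed UUID
        },
    }

requests = [passthru_create_request(i, req_id=i) for i in range(1, num_base_bdevs)]
names = [r["params"]["name"] for r in requests]
```

The deterministic UUIDs are what lets the raid superblock written earlier re-identify each base bdev (`raid superblock found on bdev ptN`) after the passthru layer is recreated.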
00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.077 [2024-11-26 21:23:17.195078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:59.077 [2024-11-26 21:23:17.195161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.077 [2024-11-26 21:23:17.195180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:59.077 [2024-11-26 21:23:17.195188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.077 [2024-11-26 21:23:17.195543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.077 [2024-11-26 21:23:17.195558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:59.077 [2024-11-26 21:23:17.195617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:59.077 [2024-11-26 21:23:17.195633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.077 pt2 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.077 [2024-11-26 21:23:17.207070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:59.077 [2024-11-26 21:23:17.207113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.077 [2024-11-26 21:23:17.207135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:59.077 [2024-11-26 21:23:17.207145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.077 [2024-11-26 21:23:17.207468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.077 [2024-11-26 21:23:17.207483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:59.077 [2024-11-26 21:23:17.207534] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:59.077 [2024-11-26 21:23:17.207556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:59.077 pt3 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.077 [2024-11-26 21:23:17.219050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:59.077 [2024-11-26 21:23:17.219087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.077 [2024-11-26 21:23:17.219118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:59.077 [2024-11-26 21:23:17.219125] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.077 [2024-11-26 21:23:17.219483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.077 [2024-11-26 21:23:17.219497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:59.077 [2024-11-26 21:23:17.219551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:59.077 [2024-11-26 21:23:17.219571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:59.077 [2024-11-26 21:23:17.219703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:59.077 [2024-11-26 21:23:17.219711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:59.077 [2024-11-26 21:23:17.219967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:59.077 pt4 00:15:59.077 [2024-11-26 21:23:17.227025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:59.077 [2024-11-26 21:23:17.227047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:59.077 [2024-11-26 21:23:17.227219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.077 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.337 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.337 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.337 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.337 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.337 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.337 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.337 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.337 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.337 "name": "raid_bdev1", 00:15:59.337 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:15:59.337 "strip_size_kb": 64, 00:15:59.337 "state": "online", 00:15:59.337 "raid_level": "raid5f", 00:15:59.337 "superblock": true, 00:15:59.337 "num_base_bdevs": 4, 00:15:59.337 "num_base_bdevs_discovered": 4, 00:15:59.337 "num_base_bdevs_operational": 4, 00:15:59.337 "base_bdevs_list": [ 00:15:59.337 { 00:15:59.337 "name": "pt1", 00:15:59.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.337 "is_configured": true, 00:15:59.337 
"data_offset": 2048, 00:15:59.337 "data_size": 63488 00:15:59.337 }, 00:15:59.337 { 00:15:59.337 "name": "pt2", 00:15:59.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.337 "is_configured": true, 00:15:59.337 "data_offset": 2048, 00:15:59.337 "data_size": 63488 00:15:59.337 }, 00:15:59.337 { 00:15:59.337 "name": "pt3", 00:15:59.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.337 "is_configured": true, 00:15:59.337 "data_offset": 2048, 00:15:59.337 "data_size": 63488 00:15:59.337 }, 00:15:59.337 { 00:15:59.337 "name": "pt4", 00:15:59.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:59.337 "is_configured": true, 00:15:59.337 "data_offset": 2048, 00:15:59.337 "data_size": 63488 00:15:59.337 } 00:15:59.337 ] 00:15:59.337 }' 00:15:59.337 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.337 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.597 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:59.597 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:59.597 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.597 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.597 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.597 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.597 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.597 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.597 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.597 21:23:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.597 [2024-11-26 21:23:17.692168] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.597 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.597 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.597 "name": "raid_bdev1", 00:15:59.597 "aliases": [ 00:15:59.597 "0550fe2e-79a9-4c15-bd13-07e6ca718c5e" 00:15:59.597 ], 00:15:59.597 "product_name": "Raid Volume", 00:15:59.597 "block_size": 512, 00:15:59.597 "num_blocks": 190464, 00:15:59.597 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:15:59.597 "assigned_rate_limits": { 00:15:59.597 "rw_ios_per_sec": 0, 00:15:59.597 "rw_mbytes_per_sec": 0, 00:15:59.597 "r_mbytes_per_sec": 0, 00:15:59.597 "w_mbytes_per_sec": 0 00:15:59.597 }, 00:15:59.597 "claimed": false, 00:15:59.597 "zoned": false, 00:15:59.597 "supported_io_types": { 00:15:59.597 "read": true, 00:15:59.597 "write": true, 00:15:59.597 "unmap": false, 00:15:59.597 "flush": false, 00:15:59.597 "reset": true, 00:15:59.597 "nvme_admin": false, 00:15:59.597 "nvme_io": false, 00:15:59.597 "nvme_io_md": false, 00:15:59.597 "write_zeroes": true, 00:15:59.597 "zcopy": false, 00:15:59.597 "get_zone_info": false, 00:15:59.597 "zone_management": false, 00:15:59.597 "zone_append": false, 00:15:59.597 "compare": false, 00:15:59.597 "compare_and_write": false, 00:15:59.597 "abort": false, 00:15:59.597 "seek_hole": false, 00:15:59.597 "seek_data": false, 00:15:59.597 "copy": false, 00:15:59.597 "nvme_iov_md": false 00:15:59.597 }, 00:15:59.597 "driver_specific": { 00:15:59.597 "raid": { 00:15:59.597 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:15:59.597 "strip_size_kb": 64, 00:15:59.597 "state": "online", 00:15:59.597 "raid_level": "raid5f", 00:15:59.597 "superblock": true, 00:15:59.597 "num_base_bdevs": 4, 00:15:59.597 "num_base_bdevs_discovered": 4, 
00:15:59.597 "num_base_bdevs_operational": 4, 00:15:59.597 "base_bdevs_list": [ 00:15:59.597 { 00:15:59.597 "name": "pt1", 00:15:59.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.597 "is_configured": true, 00:15:59.597 "data_offset": 2048, 00:15:59.597 "data_size": 63488 00:15:59.597 }, 00:15:59.597 { 00:15:59.597 "name": "pt2", 00:15:59.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.597 "is_configured": true, 00:15:59.597 "data_offset": 2048, 00:15:59.597 "data_size": 63488 00:15:59.597 }, 00:15:59.597 { 00:15:59.597 "name": "pt3", 00:15:59.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.597 "is_configured": true, 00:15:59.597 "data_offset": 2048, 00:15:59.597 "data_size": 63488 00:15:59.597 }, 00:15:59.597 { 00:15:59.597 "name": "pt4", 00:15:59.597 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:59.597 "is_configured": true, 00:15:59.598 "data_offset": 2048, 00:15:59.598 "data_size": 63488 00:15:59.598 } 00:15:59.598 ] 00:15:59.598 } 00:15:59.598 } 00:15:59.598 }' 00:15:59.598 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:59.857 pt2 00:15:59.857 pt3 00:15:59.857 pt4' 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.857 21:23:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.857 
21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.857 21:23:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.857 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.857 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.117 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.117 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.117 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:00.117 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.117 [2024-11-26 21:23:18.023494] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
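The `verify_raid_bdev_properties` loop above compares `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` between the raid volume and each configured base bdev; with a 512-byte block and no metadata fields present, jq's `join` renders the nulls as empty fields, giving `512   ` (three trailing separators) on both sides, which is what the `[[ 512 == \5\1\2\ \ \ ]]` checks assert. A minimal Python rendering of that comparison, with the bdev dicts reduced to the fields the log actually shows:

```python
def join_fields(bdev):
    # jq's join(" ") renders null/missing values as empty fields, so a
    # 512-byte bdev with no metadata serializes as "512   ".
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(k) is None else str(bdev.get(k)) for k in keys)

# Reduced bdev dicts; only block_size appears in the dumps above, the other
# three fields are absent (null) for both the raid volume and pt1..pt4.
raid_bdev = {"block_size": 512}
base_bdev = {"block_size": 512}

cmp_raid_bdev = join_fields(raid_bdev)
match = cmp_raid_bdev == join_fields(base_bdev)
```

The point of joining all four fields into one string is that the raid volume must expose exactly the block size and metadata layout of its base bdevs, so a single string comparison per base bdev covers the whole layout.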
00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0550fe2e-79a9-4c15-bd13-07e6ca718c5e '!=' 0550fe2e-79a9-4c15-bd13-07e6ca718c5e ']' 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.118 [2024-11-26 21:23:18.067327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.118 "name": "raid_bdev1", 00:16:00.118 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:16:00.118 "strip_size_kb": 64, 00:16:00.118 "state": "online", 00:16:00.118 "raid_level": "raid5f", 00:16:00.118 "superblock": true, 00:16:00.118 "num_base_bdevs": 4, 00:16:00.118 "num_base_bdevs_discovered": 3, 00:16:00.118 "num_base_bdevs_operational": 3, 00:16:00.118 "base_bdevs_list": [ 00:16:00.118 { 00:16:00.118 "name": null, 00:16:00.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.118 "is_configured": false, 00:16:00.118 "data_offset": 0, 00:16:00.118 "data_size": 63488 00:16:00.118 }, 00:16:00.118 { 00:16:00.118 "name": "pt2", 00:16:00.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.118 "is_configured": true, 00:16:00.118 "data_offset": 2048, 00:16:00.118 "data_size": 63488 00:16:00.118 }, 00:16:00.118 { 00:16:00.118 "name": "pt3", 00:16:00.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.118 "is_configured": true, 00:16:00.118 "data_offset": 2048, 00:16:00.118 "data_size": 63488 00:16:00.118 }, 00:16:00.118 { 00:16:00.118 "name": "pt4", 00:16:00.118 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.118 "is_configured": true, 00:16:00.118 
"data_offset": 2048, 00:16:00.118 "data_size": 63488 00:16:00.118 } 00:16:00.118 ] 00:16:00.118 }' 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.118 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.378 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:00.378 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.378 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.378 [2024-11-26 21:23:18.482589] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.378 [2024-11-26 21:23:18.482659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.378 [2024-11-26 21:23:18.482744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.378 [2024-11-26 21:23:18.482829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.378 [2024-11-26 21:23:18.482880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:00.378 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.378 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:00.378 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.378 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.378 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.378 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.378 21:23:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.379 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.638 [2024-11-26 21:23:18.562457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:00.638 [2024-11-26 21:23:18.562547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.638 [2024-11-26 21:23:18.562569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:00.638 [2024-11-26 21:23:18.562578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.638 [2024-11-26 21:23:18.565068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.638 [2024-11-26 21:23:18.565103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:00.638 [2024-11-26 21:23:18.565182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:00.638 [2024-11-26 21:23:18.565231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:00.638 pt2 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.638 "name": "raid_bdev1", 00:16:00.638 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:16:00.638 "strip_size_kb": 64, 00:16:00.638 "state": "configuring", 00:16:00.638 "raid_level": "raid5f", 00:16:00.638 "superblock": true, 00:16:00.638 
"num_base_bdevs": 4, 00:16:00.638 "num_base_bdevs_discovered": 1, 00:16:00.638 "num_base_bdevs_operational": 3, 00:16:00.638 "base_bdevs_list": [ 00:16:00.638 { 00:16:00.638 "name": null, 00:16:00.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.638 "is_configured": false, 00:16:00.638 "data_offset": 2048, 00:16:00.638 "data_size": 63488 00:16:00.638 }, 00:16:00.638 { 00:16:00.638 "name": "pt2", 00:16:00.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.638 "is_configured": true, 00:16:00.638 "data_offset": 2048, 00:16:00.638 "data_size": 63488 00:16:00.638 }, 00:16:00.638 { 00:16:00.638 "name": null, 00:16:00.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.638 "is_configured": false, 00:16:00.638 "data_offset": 2048, 00:16:00.638 "data_size": 63488 00:16:00.638 }, 00:16:00.638 { 00:16:00.638 "name": null, 00:16:00.638 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.638 "is_configured": false, 00:16:00.638 "data_offset": 2048, 00:16:00.638 "data_size": 63488 00:16:00.638 } 00:16:00.638 ] 00:16:00.638 }' 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.638 21:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.898 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:00.898 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:00.898 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:00.898 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.898 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.898 [2024-11-26 21:23:19.021651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:00.898 [2024-11-26 
21:23:19.021757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.898 [2024-11-26 21:23:19.021798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:00.898 [2024-11-26 21:23:19.021825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.898 [2024-11-26 21:23:19.022225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.898 [2024-11-26 21:23:19.022275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:00.898 [2024-11-26 21:23:19.022369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:00.898 [2024-11-26 21:23:19.022413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:00.898 pt3 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.899 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.158 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.158 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.159 "name": "raid_bdev1", 00:16:01.159 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:16:01.159 "strip_size_kb": 64, 00:16:01.159 "state": "configuring", 00:16:01.159 "raid_level": "raid5f", 00:16:01.159 "superblock": true, 00:16:01.159 "num_base_bdevs": 4, 00:16:01.159 "num_base_bdevs_discovered": 2, 00:16:01.159 "num_base_bdevs_operational": 3, 00:16:01.159 "base_bdevs_list": [ 00:16:01.159 { 00:16:01.159 "name": null, 00:16:01.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.159 "is_configured": false, 00:16:01.159 "data_offset": 2048, 00:16:01.159 "data_size": 63488 00:16:01.159 }, 00:16:01.159 { 00:16:01.159 "name": "pt2", 00:16:01.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.159 "is_configured": true, 00:16:01.159 "data_offset": 2048, 00:16:01.159 "data_size": 63488 00:16:01.159 }, 00:16:01.159 { 00:16:01.159 "name": "pt3", 00:16:01.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.159 "is_configured": true, 00:16:01.159 "data_offset": 2048, 00:16:01.159 "data_size": 63488 00:16:01.159 }, 00:16:01.159 { 00:16:01.159 "name": null, 00:16:01.159 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:01.159 "is_configured": false, 00:16:01.159 "data_offset": 2048, 
00:16:01.159 "data_size": 63488 00:16:01.159 } 00:16:01.159 ] 00:16:01.159 }' 00:16:01.159 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.159 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.418 [2024-11-26 21:23:19.460904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:01.418 [2024-11-26 21:23:19.460951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.418 [2024-11-26 21:23:19.460996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:01.418 [2024-11-26 21:23:19.461005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.418 [2024-11-26 21:23:19.461390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.418 [2024-11-26 21:23:19.461407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:01.418 [2024-11-26 21:23:19.461475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:01.418 [2024-11-26 21:23:19.461500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:01.418 [2024-11-26 21:23:19.461613] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:01.418 [2024-11-26 21:23:19.461631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:01.418 [2024-11-26 21:23:19.461897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:01.418 [2024-11-26 21:23:19.468758] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:01.418 pt4 00:16:01.418 [2024-11-26 21:23:19.468822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:01.418 [2024-11-26 21:23:19.469112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.418 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.419 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.419 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.419 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.419 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.419 
21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.419 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.419 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.419 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.419 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.419 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.419 "name": "raid_bdev1", 00:16:01.419 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:16:01.419 "strip_size_kb": 64, 00:16:01.419 "state": "online", 00:16:01.419 "raid_level": "raid5f", 00:16:01.419 "superblock": true, 00:16:01.419 "num_base_bdevs": 4, 00:16:01.419 "num_base_bdevs_discovered": 3, 00:16:01.419 "num_base_bdevs_operational": 3, 00:16:01.419 "base_bdevs_list": [ 00:16:01.419 { 00:16:01.419 "name": null, 00:16:01.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.419 "is_configured": false, 00:16:01.419 "data_offset": 2048, 00:16:01.419 "data_size": 63488 00:16:01.419 }, 00:16:01.419 { 00:16:01.419 "name": "pt2", 00:16:01.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.419 "is_configured": true, 00:16:01.419 "data_offset": 2048, 00:16:01.419 "data_size": 63488 00:16:01.419 }, 00:16:01.419 { 00:16:01.419 "name": "pt3", 00:16:01.419 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.419 "is_configured": true, 00:16:01.419 "data_offset": 2048, 00:16:01.419 "data_size": 63488 00:16:01.419 }, 00:16:01.419 { 00:16:01.419 "name": "pt4", 00:16:01.419 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:01.419 "is_configured": true, 00:16:01.419 "data_offset": 2048, 00:16:01.419 "data_size": 63488 00:16:01.419 } 00:16:01.419 ] 00:16:01.419 }' 00:16:01.419 21:23:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.419 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.988 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.988 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.988 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.988 [2024-11-26 21:23:19.969394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.988 [2024-11-26 21:23:19.969471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.988 [2024-11-26 21:23:19.969555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.988 [2024-11-26 21:23:19.969638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.988 [2024-11-26 21:23:19.969700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:01.988 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.988 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:01.988 21:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.988 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.988 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.988 21:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.988 [2024-11-26 21:23:20.029304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.988 [2024-11-26 21:23:20.029363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.988 [2024-11-26 21:23:20.029387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:01.988 [2024-11-26 21:23:20.029401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.988 [2024-11-26 21:23:20.031812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.988 [2024-11-26 21:23:20.031892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.988 [2024-11-26 21:23:20.031981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:01.988 [2024-11-26 21:23:20.032029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.988 
[2024-11-26 21:23:20.032190] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:01.988 [2024-11-26 21:23:20.032203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.988 [2024-11-26 21:23:20.032219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:01.988 [2024-11-26 21:23:20.032285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.988 [2024-11-26 21:23:20.032388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:01.988 pt1 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.988 "name": "raid_bdev1", 00:16:01.988 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:16:01.988 "strip_size_kb": 64, 00:16:01.988 "state": "configuring", 00:16:01.988 "raid_level": "raid5f", 00:16:01.988 "superblock": true, 00:16:01.988 "num_base_bdevs": 4, 00:16:01.988 "num_base_bdevs_discovered": 2, 00:16:01.988 "num_base_bdevs_operational": 3, 00:16:01.988 "base_bdevs_list": [ 00:16:01.988 { 00:16:01.988 "name": null, 00:16:01.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.988 "is_configured": false, 00:16:01.988 "data_offset": 2048, 00:16:01.988 "data_size": 63488 00:16:01.988 }, 00:16:01.988 { 00:16:01.988 "name": "pt2", 00:16:01.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.988 "is_configured": true, 00:16:01.988 "data_offset": 2048, 00:16:01.988 "data_size": 63488 00:16:01.988 }, 00:16:01.988 { 00:16:01.988 "name": "pt3", 00:16:01.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.988 "is_configured": true, 00:16:01.988 "data_offset": 2048, 00:16:01.988 "data_size": 63488 00:16:01.988 }, 00:16:01.988 { 00:16:01.988 "name": null, 00:16:01.988 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:01.988 "is_configured": false, 00:16:01.988 "data_offset": 2048, 00:16:01.988 "data_size": 63488 00:16:01.988 } 00:16:01.988 ] 
00:16:01.988 }' 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.988 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.559 [2024-11-26 21:23:20.516466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:02.559 [2024-11-26 21:23:20.516555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.559 [2024-11-26 21:23:20.516607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:02.559 [2024-11-26 21:23:20.516634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.559 [2024-11-26 21:23:20.517054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.559 [2024-11-26 21:23:20.517111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:02.559 [2024-11-26 21:23:20.517203] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:02.559 [2024-11-26 21:23:20.517250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:02.559 [2024-11-26 21:23:20.517403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:02.559 [2024-11-26 21:23:20.517439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:02.559 [2024-11-26 21:23:20.517725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:02.559 [2024-11-26 21:23:20.524435] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:02.559 [2024-11-26 21:23:20.524491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:02.559 [2024-11-26 21:23:20.524765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.559 pt4 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.559 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.560 21:23:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.560 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.560 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.560 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.560 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.560 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.560 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.560 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.560 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.560 "name": "raid_bdev1", 00:16:02.560 "uuid": "0550fe2e-79a9-4c15-bd13-07e6ca718c5e", 00:16:02.560 "strip_size_kb": 64, 00:16:02.560 "state": "online", 00:16:02.560 "raid_level": "raid5f", 00:16:02.560 "superblock": true, 00:16:02.560 "num_base_bdevs": 4, 00:16:02.560 "num_base_bdevs_discovered": 3, 00:16:02.560 "num_base_bdevs_operational": 3, 00:16:02.560 "base_bdevs_list": [ 00:16:02.560 { 00:16:02.560 "name": null, 00:16:02.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.560 "is_configured": false, 00:16:02.560 "data_offset": 2048, 00:16:02.560 "data_size": 63488 00:16:02.560 }, 00:16:02.560 { 00:16:02.560 "name": "pt2", 00:16:02.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.560 "is_configured": true, 00:16:02.560 "data_offset": 2048, 00:16:02.560 "data_size": 63488 00:16:02.560 }, 00:16:02.560 { 00:16:02.560 "name": "pt3", 00:16:02.560 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.560 "is_configured": true, 00:16:02.560 "data_offset": 2048, 00:16:02.560 "data_size": 63488 
00:16:02.560 }, 00:16:02.560 { 00:16:02.560 "name": "pt4", 00:16:02.560 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:02.560 "is_configured": true, 00:16:02.560 "data_offset": 2048, 00:16:02.560 "data_size": 63488 00:16:02.560 } 00:16:02.560 ] 00:16:02.560 }' 00:16:02.560 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.560 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.130 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:03.130 21:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:03.130 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.130 21:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.130 [2024-11-26 21:23:21.041071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0550fe2e-79a9-4c15-bd13-07e6ca718c5e '!=' 0550fe2e-79a9-4c15-bd13-07e6ca718c5e ']' 00:16:03.130 21:23:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83906 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83906 ']' 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83906 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83906 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83906' 00:16:03.130 killing process with pid 83906 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83906 00:16:03.130 [2024-11-26 21:23:21.122809] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:03.130 21:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83906 00:16:03.130 [2024-11-26 21:23:21.122886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.130 [2024-11-26 21:23:21.122973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.130 [2024-11-26 21:23:21.122995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:03.390 [2024-11-26 21:23:21.535283] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:04.772 21:23:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:04.772 
00:16:04.772 real 0m8.653s 00:16:04.772 user 0m13.396s 00:16:04.772 sys 0m1.710s 00:16:04.772 21:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.772 21:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.772 ************************************ 00:16:04.772 END TEST raid5f_superblock_test 00:16:04.772 ************************************ 00:16:04.772 21:23:22 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:04.772 21:23:22 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:04.772 21:23:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:04.772 21:23:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.772 21:23:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:04.772 ************************************ 00:16:04.772 START TEST raid5f_rebuild_test 00:16:04.772 ************************************ 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:04.772 21:23:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84392 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84392 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84392 ']' 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.772 21:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.772 [2024-11-26 21:23:22.891035] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:16:04.772 [2024-11-26 21:23:22.891270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84392 ] 00:16:04.772 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:04.772 Zero copy mechanism will not be used. 00:16:05.032 [2024-11-26 21:23:23.053301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.032 [2024-11-26 21:23:23.186463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.292 [2024-11-26 21:23:23.405303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.292 [2024-11-26 21:23:23.405478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.552 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.552 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:05.552 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.552 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:05.552 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.552 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.813 BaseBdev1_malloc 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:16:05.813 [2024-11-26 21:23:23.755450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:05.813 [2024-11-26 21:23:23.755523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.813 [2024-11-26 21:23:23.755546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:05.813 [2024-11-26 21:23:23.755558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.813 [2024-11-26 21:23:23.757887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.813 [2024-11-26 21:23:23.758028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:05.813 BaseBdev1 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.813 BaseBdev2_malloc 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.813 [2024-11-26 21:23:23.815436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:05.813 [2024-11-26 21:23:23.815564] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.813 [2024-11-26 21:23:23.815605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:05.813 [2024-11-26 21:23:23.815635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.813 [2024-11-26 21:23:23.817944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.813 [2024-11-26 21:23:23.818029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:05.813 BaseBdev2 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.813 BaseBdev3_malloc 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.813 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.813 [2024-11-26 21:23:23.886963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:05.813 [2024-11-26 21:23:23.887031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.814 [2024-11-26 21:23:23.887052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:05.814 
[2024-11-26 21:23:23.887065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.814 [2024-11-26 21:23:23.889395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.814 [2024-11-26 21:23:23.889433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:05.814 BaseBdev3 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.814 BaseBdev4_malloc 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.814 [2024-11-26 21:23:23.948751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:05.814 [2024-11-26 21:23:23.948868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.814 [2024-11-26 21:23:23.948910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:05.814 [2024-11-26 21:23:23.948943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.814 [2024-11-26 21:23:23.951201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:05.814 [2024-11-26 21:23:23.951273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:05.814 BaseBdev4 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.814 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.074 spare_malloc 00:16:06.074 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.074 21:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:06.074 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.074 21:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.074 spare_delay 00:16:06.074 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.074 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.074 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.074 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.074 [2024-11-26 21:23:24.017870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.074 [2024-11-26 21:23:24.017978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.074 [2024-11-26 21:23:24.017999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:06.074 [2024-11-26 21:23:24.018010] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.074 [2024-11-26 21:23:24.020251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.074 [2024-11-26 21:23:24.020289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.074 spare 00:16:06.074 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.074 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.075 [2024-11-26 21:23:24.029901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.075 [2024-11-26 21:23:24.031906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.075 [2024-11-26 21:23:24.031977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.075 [2024-11-26 21:23:24.032028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.075 [2024-11-26 21:23:24.032112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:06.075 [2024-11-26 21:23:24.032131] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:06.075 [2024-11-26 21:23:24.032408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:06.075 [2024-11-26 21:23:24.039204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:06.075 [2024-11-26 21:23:24.039273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:06.075 [2024-11-26 
21:23:24.039452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.075 "name": "raid_bdev1", 00:16:06.075 "uuid": 
"27d19e73-9876-44f3-aac0-034dac325059", 00:16:06.075 "strip_size_kb": 64, 00:16:06.075 "state": "online", 00:16:06.075 "raid_level": "raid5f", 00:16:06.075 "superblock": false, 00:16:06.075 "num_base_bdevs": 4, 00:16:06.075 "num_base_bdevs_discovered": 4, 00:16:06.075 "num_base_bdevs_operational": 4, 00:16:06.075 "base_bdevs_list": [ 00:16:06.075 { 00:16:06.075 "name": "BaseBdev1", 00:16:06.075 "uuid": "c2303496-34ea-59ed-b00e-b56dd689a71e", 00:16:06.075 "is_configured": true, 00:16:06.075 "data_offset": 0, 00:16:06.075 "data_size": 65536 00:16:06.075 }, 00:16:06.075 { 00:16:06.075 "name": "BaseBdev2", 00:16:06.075 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:06.075 "is_configured": true, 00:16:06.075 "data_offset": 0, 00:16:06.075 "data_size": 65536 00:16:06.075 }, 00:16:06.075 { 00:16:06.075 "name": "BaseBdev3", 00:16:06.075 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:06.075 "is_configured": true, 00:16:06.075 "data_offset": 0, 00:16:06.075 "data_size": 65536 00:16:06.075 }, 00:16:06.075 { 00:16:06.075 "name": "BaseBdev4", 00:16:06.075 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:06.075 "is_configured": true, 00:16:06.075 "data_offset": 0, 00:16:06.075 "data_size": 65536 00:16:06.075 } 00:16:06.075 ] 00:16:06.075 }' 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.075 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.335 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:06.335 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:06.596 [2024-11-26 21:23:24.495919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.596 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:06.856 [2024-11-26 21:23:24.759369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:06.856 /dev/nbd0 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.856 1+0 records in 00:16:06.856 1+0 records out 00:16:06.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384599 s, 10.7 MB/s 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.856 21:23:24 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:06.856 21:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:07.433 512+0 records in 00:16:07.433 512+0 records out 00:16:07.433 100663296 bytes (101 MB, 96 MiB) copied, 0.518697 s, 194 MB/s 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.433 [2024-11-26 21:23:25.558255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.433 [2024-11-26 21:23:25.580086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.433 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.693 "name": "raid_bdev1", 00:16:07.693 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:07.693 "strip_size_kb": 64, 00:16:07.693 "state": "online", 00:16:07.693 "raid_level": "raid5f", 00:16:07.693 "superblock": false, 00:16:07.693 "num_base_bdevs": 4, 00:16:07.693 "num_base_bdevs_discovered": 3, 00:16:07.693 "num_base_bdevs_operational": 3, 00:16:07.693 "base_bdevs_list": [ 00:16:07.693 { 00:16:07.693 "name": null, 00:16:07.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.693 "is_configured": false, 00:16:07.693 "data_offset": 0, 00:16:07.693 "data_size": 65536 00:16:07.693 }, 00:16:07.693 { 00:16:07.693 "name": "BaseBdev2", 00:16:07.693 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:07.693 "is_configured": true, 00:16:07.693 
"data_offset": 0, 00:16:07.693 "data_size": 65536 00:16:07.693 }, 00:16:07.693 { 00:16:07.693 "name": "BaseBdev3", 00:16:07.693 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:07.693 "is_configured": true, 00:16:07.693 "data_offset": 0, 00:16:07.693 "data_size": 65536 00:16:07.693 }, 00:16:07.693 { 00:16:07.693 "name": "BaseBdev4", 00:16:07.693 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:07.693 "is_configured": true, 00:16:07.693 "data_offset": 0, 00:16:07.693 "data_size": 65536 00:16:07.693 } 00:16:07.693 ] 00:16:07.693 }' 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.693 21:23:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.953 21:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.953 21:23:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.953 21:23:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.953 [2024-11-26 21:23:26.035249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.953 [2024-11-26 21:23:26.050720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:07.953 21:23:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.953 21:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:07.953 [2024-11-26 21:23:26.059577] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.334 "name": "raid_bdev1", 00:16:09.334 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:09.334 "strip_size_kb": 64, 00:16:09.334 "state": "online", 00:16:09.334 "raid_level": "raid5f", 00:16:09.334 "superblock": false, 00:16:09.334 "num_base_bdevs": 4, 00:16:09.334 "num_base_bdevs_discovered": 4, 00:16:09.334 "num_base_bdevs_operational": 4, 00:16:09.334 "process": { 00:16:09.334 "type": "rebuild", 00:16:09.334 "target": "spare", 00:16:09.334 "progress": { 00:16:09.334 "blocks": 19200, 00:16:09.334 "percent": 9 00:16:09.334 } 00:16:09.334 }, 00:16:09.334 "base_bdevs_list": [ 00:16:09.334 { 00:16:09.334 "name": "spare", 00:16:09.334 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:09.334 "is_configured": true, 00:16:09.334 "data_offset": 0, 00:16:09.334 "data_size": 65536 00:16:09.334 }, 00:16:09.334 { 00:16:09.334 "name": "BaseBdev2", 00:16:09.334 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:09.334 "is_configured": true, 00:16:09.334 "data_offset": 0, 00:16:09.334 "data_size": 65536 00:16:09.334 }, 00:16:09.334 { 00:16:09.334 "name": "BaseBdev3", 00:16:09.334 "uuid": 
"f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:09.334 "is_configured": true, 00:16:09.334 "data_offset": 0, 00:16:09.334 "data_size": 65536 00:16:09.334 }, 00:16:09.334 { 00:16:09.334 "name": "BaseBdev4", 00:16:09.334 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:09.334 "is_configured": true, 00:16:09.334 "data_offset": 0, 00:16:09.334 "data_size": 65536 00:16:09.334 } 00:16:09.334 ] 00:16:09.334 }' 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.334 [2024-11-26 21:23:27.222753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.334 [2024-11-26 21:23:27.266553] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.334 [2024-11-26 21:23:27.266678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.334 [2024-11-26 21:23:27.266718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.334 [2024-11-26 21:23:27.266742] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.334 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.334 "name": "raid_bdev1", 00:16:09.334 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:09.334 "strip_size_kb": 64, 00:16:09.334 "state": "online", 00:16:09.334 "raid_level": "raid5f", 00:16:09.334 "superblock": false, 00:16:09.334 "num_base_bdevs": 4, 00:16:09.334 "num_base_bdevs_discovered": 3, 00:16:09.335 
"num_base_bdevs_operational": 3, 00:16:09.335 "base_bdevs_list": [ 00:16:09.335 { 00:16:09.335 "name": null, 00:16:09.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.335 "is_configured": false, 00:16:09.335 "data_offset": 0, 00:16:09.335 "data_size": 65536 00:16:09.335 }, 00:16:09.335 { 00:16:09.335 "name": "BaseBdev2", 00:16:09.335 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:09.335 "is_configured": true, 00:16:09.335 "data_offset": 0, 00:16:09.335 "data_size": 65536 00:16:09.335 }, 00:16:09.335 { 00:16:09.335 "name": "BaseBdev3", 00:16:09.335 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:09.335 "is_configured": true, 00:16:09.335 "data_offset": 0, 00:16:09.335 "data_size": 65536 00:16:09.335 }, 00:16:09.335 { 00:16:09.335 "name": "BaseBdev4", 00:16:09.335 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:09.335 "is_configured": true, 00:16:09.335 "data_offset": 0, 00:16:09.335 "data_size": 65536 00:16:09.335 } 00:16:09.335 ] 00:16:09.335 }' 00:16:09.335 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.335 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.904 21:23:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.904 "name": "raid_bdev1", 00:16:09.904 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:09.904 "strip_size_kb": 64, 00:16:09.904 "state": "online", 00:16:09.904 "raid_level": "raid5f", 00:16:09.904 "superblock": false, 00:16:09.904 "num_base_bdevs": 4, 00:16:09.904 "num_base_bdevs_discovered": 3, 00:16:09.904 "num_base_bdevs_operational": 3, 00:16:09.904 "base_bdevs_list": [ 00:16:09.904 { 00:16:09.904 "name": null, 00:16:09.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.904 "is_configured": false, 00:16:09.904 "data_offset": 0, 00:16:09.904 "data_size": 65536 00:16:09.904 }, 00:16:09.904 { 00:16:09.904 "name": "BaseBdev2", 00:16:09.904 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:09.904 "is_configured": true, 00:16:09.904 "data_offset": 0, 00:16:09.904 "data_size": 65536 00:16:09.904 }, 00:16:09.904 { 00:16:09.904 "name": "BaseBdev3", 00:16:09.904 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:09.904 "is_configured": true, 00:16:09.904 "data_offset": 0, 00:16:09.904 "data_size": 65536 00:16:09.904 }, 00:16:09.904 { 00:16:09.904 "name": "BaseBdev4", 00:16:09.904 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:09.904 "is_configured": true, 00:16:09.904 "data_offset": 0, 00:16:09.904 "data_size": 65536 00:16:09.904 } 00:16:09.904 ] 00:16:09.904 }' 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.904 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:09.905 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.905 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.905 [2024-11-26 21:23:27.953769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.905 [2024-11-26 21:23:27.967303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:09.905 21:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.905 21:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:09.905 [2024-11-26 21:23:27.975861] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.844 21:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.844 21:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.844 21:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.844 21:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.844 21:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.844 21:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.844 21:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.844 21:23:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.844 21:23:28 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.104 21:23:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.104 "name": "raid_bdev1", 00:16:11.104 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:11.104 "strip_size_kb": 64, 00:16:11.104 "state": "online", 00:16:11.104 "raid_level": "raid5f", 00:16:11.104 "superblock": false, 00:16:11.104 "num_base_bdevs": 4, 00:16:11.104 "num_base_bdevs_discovered": 4, 00:16:11.104 "num_base_bdevs_operational": 4, 00:16:11.104 "process": { 00:16:11.104 "type": "rebuild", 00:16:11.104 "target": "spare", 00:16:11.104 "progress": { 00:16:11.104 "blocks": 19200, 00:16:11.104 "percent": 9 00:16:11.104 } 00:16:11.104 }, 00:16:11.104 "base_bdevs_list": [ 00:16:11.104 { 00:16:11.104 "name": "spare", 00:16:11.104 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:11.104 "is_configured": true, 00:16:11.104 "data_offset": 0, 00:16:11.104 "data_size": 65536 00:16:11.104 }, 00:16:11.104 { 00:16:11.104 "name": "BaseBdev2", 00:16:11.104 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:11.104 "is_configured": true, 00:16:11.104 "data_offset": 0, 00:16:11.104 "data_size": 65536 00:16:11.104 }, 00:16:11.104 { 00:16:11.104 "name": "BaseBdev3", 00:16:11.104 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:11.104 "is_configured": true, 00:16:11.104 "data_offset": 0, 00:16:11.104 "data_size": 65536 00:16:11.104 }, 00:16:11.104 { 00:16:11.104 "name": "BaseBdev4", 00:16:11.104 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:11.104 "is_configured": true, 00:16:11.104 "data_offset": 0, 00:16:11.104 "data_size": 65536 00:16:11.104 } 00:16:11.104 ] 00:16:11.104 }' 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=611 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.104 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.104 
"name": "raid_bdev1", 00:16:11.104 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:11.104 "strip_size_kb": 64, 00:16:11.104 "state": "online", 00:16:11.104 "raid_level": "raid5f", 00:16:11.104 "superblock": false, 00:16:11.104 "num_base_bdevs": 4, 00:16:11.105 "num_base_bdevs_discovered": 4, 00:16:11.105 "num_base_bdevs_operational": 4, 00:16:11.105 "process": { 00:16:11.105 "type": "rebuild", 00:16:11.105 "target": "spare", 00:16:11.105 "progress": { 00:16:11.105 "blocks": 21120, 00:16:11.105 "percent": 10 00:16:11.105 } 00:16:11.105 }, 00:16:11.105 "base_bdevs_list": [ 00:16:11.105 { 00:16:11.105 "name": "spare", 00:16:11.105 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:11.105 "is_configured": true, 00:16:11.105 "data_offset": 0, 00:16:11.105 "data_size": 65536 00:16:11.105 }, 00:16:11.105 { 00:16:11.105 "name": "BaseBdev2", 00:16:11.105 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:11.105 "is_configured": true, 00:16:11.105 "data_offset": 0, 00:16:11.105 "data_size": 65536 00:16:11.105 }, 00:16:11.105 { 00:16:11.105 "name": "BaseBdev3", 00:16:11.105 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:11.105 "is_configured": true, 00:16:11.105 "data_offset": 0, 00:16:11.105 "data_size": 65536 00:16:11.105 }, 00:16:11.105 { 00:16:11.105 "name": "BaseBdev4", 00:16:11.105 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:11.105 "is_configured": true, 00:16:11.105 "data_offset": 0, 00:16:11.105 "data_size": 65536 00:16:11.105 } 00:16:11.105 ] 00:16:11.105 }' 00:16:11.105 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.105 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.105 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.105 21:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.105 21:23:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.484 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.484 "name": "raid_bdev1", 00:16:12.484 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:12.484 "strip_size_kb": 64, 00:16:12.484 "state": "online", 00:16:12.484 "raid_level": "raid5f", 00:16:12.484 "superblock": false, 00:16:12.484 "num_base_bdevs": 4, 00:16:12.484 "num_base_bdevs_discovered": 4, 00:16:12.484 "num_base_bdevs_operational": 4, 00:16:12.484 "process": { 00:16:12.484 "type": "rebuild", 00:16:12.484 "target": "spare", 00:16:12.484 "progress": { 00:16:12.484 "blocks": 42240, 00:16:12.484 "percent": 21 00:16:12.484 } 00:16:12.484 }, 00:16:12.484 "base_bdevs_list": [ 00:16:12.484 { 
00:16:12.484 "name": "spare", 00:16:12.484 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:12.484 "is_configured": true, 00:16:12.484 "data_offset": 0, 00:16:12.484 "data_size": 65536 00:16:12.484 }, 00:16:12.484 { 00:16:12.484 "name": "BaseBdev2", 00:16:12.484 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:12.484 "is_configured": true, 00:16:12.484 "data_offset": 0, 00:16:12.484 "data_size": 65536 00:16:12.484 }, 00:16:12.484 { 00:16:12.484 "name": "BaseBdev3", 00:16:12.484 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:12.484 "is_configured": true, 00:16:12.484 "data_offset": 0, 00:16:12.484 "data_size": 65536 00:16:12.484 }, 00:16:12.484 { 00:16:12.485 "name": "BaseBdev4", 00:16:12.485 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:12.485 "is_configured": true, 00:16:12.485 "data_offset": 0, 00:16:12.485 "data_size": 65536 00:16:12.485 } 00:16:12.485 ] 00:16:12.485 }' 00:16:12.485 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.485 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.485 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.485 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.485 21:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.424 "name": "raid_bdev1", 00:16:13.424 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:13.424 "strip_size_kb": 64, 00:16:13.424 "state": "online", 00:16:13.424 "raid_level": "raid5f", 00:16:13.424 "superblock": false, 00:16:13.424 "num_base_bdevs": 4, 00:16:13.424 "num_base_bdevs_discovered": 4, 00:16:13.424 "num_base_bdevs_operational": 4, 00:16:13.424 "process": { 00:16:13.424 "type": "rebuild", 00:16:13.424 "target": "spare", 00:16:13.424 "progress": { 00:16:13.424 "blocks": 65280, 00:16:13.424 "percent": 33 00:16:13.424 } 00:16:13.424 }, 00:16:13.424 "base_bdevs_list": [ 00:16:13.424 { 00:16:13.424 "name": "spare", 00:16:13.424 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:13.424 "is_configured": true, 00:16:13.424 "data_offset": 0, 00:16:13.424 "data_size": 65536 00:16:13.424 }, 00:16:13.424 { 00:16:13.424 "name": "BaseBdev2", 00:16:13.424 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:13.424 "is_configured": true, 00:16:13.424 "data_offset": 0, 00:16:13.424 "data_size": 65536 00:16:13.424 }, 00:16:13.424 { 00:16:13.424 "name": "BaseBdev3", 00:16:13.424 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:13.424 "is_configured": true, 00:16:13.424 "data_offset": 0, 00:16:13.424 
"data_size": 65536 00:16:13.424 }, 00:16:13.424 { 00:16:13.424 "name": "BaseBdev4", 00:16:13.424 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:13.424 "is_configured": true, 00:16:13.424 "data_offset": 0, 00:16:13.424 "data_size": 65536 00:16:13.424 } 00:16:13.424 ] 00:16:13.424 }' 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.424 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.425 21:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.803 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.803 "name": "raid_bdev1", 00:16:14.803 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:14.803 "strip_size_kb": 64, 00:16:14.803 "state": "online", 00:16:14.803 "raid_level": "raid5f", 00:16:14.803 "superblock": false, 00:16:14.803 "num_base_bdevs": 4, 00:16:14.803 "num_base_bdevs_discovered": 4, 00:16:14.803 "num_base_bdevs_operational": 4, 00:16:14.803 "process": { 00:16:14.803 "type": "rebuild", 00:16:14.803 "target": "spare", 00:16:14.803 "progress": { 00:16:14.803 "blocks": 86400, 00:16:14.803 "percent": 43 00:16:14.803 } 00:16:14.803 }, 00:16:14.803 "base_bdevs_list": [ 00:16:14.803 { 00:16:14.803 "name": "spare", 00:16:14.803 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:14.803 "is_configured": true, 00:16:14.803 "data_offset": 0, 00:16:14.803 "data_size": 65536 00:16:14.803 }, 00:16:14.803 { 00:16:14.803 "name": "BaseBdev2", 00:16:14.803 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:14.803 "is_configured": true, 00:16:14.803 "data_offset": 0, 00:16:14.803 "data_size": 65536 00:16:14.803 }, 00:16:14.803 { 00:16:14.803 "name": "BaseBdev3", 00:16:14.803 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:14.803 "is_configured": true, 00:16:14.803 "data_offset": 0, 00:16:14.804 "data_size": 65536 00:16:14.804 }, 00:16:14.804 { 00:16:14.804 "name": "BaseBdev4", 00:16:14.804 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:14.804 "is_configured": true, 00:16:14.804 "data_offset": 0, 00:16:14.804 "data_size": 65536 00:16:14.804 } 00:16:14.804 ] 00:16:14.804 }' 00:16:14.804 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.804 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.804 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:14.804 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.804 21:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.742 "name": "raid_bdev1", 00:16:15.742 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:15.742 "strip_size_kb": 64, 00:16:15.742 "state": "online", 00:16:15.742 "raid_level": "raid5f", 00:16:15.742 "superblock": false, 00:16:15.742 "num_base_bdevs": 4, 00:16:15.742 "num_base_bdevs_discovered": 4, 00:16:15.742 "num_base_bdevs_operational": 4, 00:16:15.742 "process": { 00:16:15.742 "type": "rebuild", 00:16:15.742 "target": "spare", 00:16:15.742 
"progress": { 00:16:15.742 "blocks": 109440, 00:16:15.742 "percent": 55 00:16:15.742 } 00:16:15.742 }, 00:16:15.742 "base_bdevs_list": [ 00:16:15.742 { 00:16:15.742 "name": "spare", 00:16:15.742 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:15.742 "is_configured": true, 00:16:15.742 "data_offset": 0, 00:16:15.742 "data_size": 65536 00:16:15.742 }, 00:16:15.742 { 00:16:15.742 "name": "BaseBdev2", 00:16:15.742 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:15.742 "is_configured": true, 00:16:15.742 "data_offset": 0, 00:16:15.742 "data_size": 65536 00:16:15.742 }, 00:16:15.742 { 00:16:15.742 "name": "BaseBdev3", 00:16:15.742 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:15.742 "is_configured": true, 00:16:15.742 "data_offset": 0, 00:16:15.742 "data_size": 65536 00:16:15.742 }, 00:16:15.742 { 00:16:15.742 "name": "BaseBdev4", 00:16:15.742 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:15.742 "is_configured": true, 00:16:15.742 "data_offset": 0, 00:16:15.742 "data_size": 65536 00:16:15.742 } 00:16:15.742 ] 00:16:15.742 }' 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.742 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.743 21:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.122 21:23:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.122 "name": "raid_bdev1", 00:16:17.122 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:17.122 "strip_size_kb": 64, 00:16:17.122 "state": "online", 00:16:17.122 "raid_level": "raid5f", 00:16:17.122 "superblock": false, 00:16:17.122 "num_base_bdevs": 4, 00:16:17.122 "num_base_bdevs_discovered": 4, 00:16:17.122 "num_base_bdevs_operational": 4, 00:16:17.122 "process": { 00:16:17.122 "type": "rebuild", 00:16:17.122 "target": "spare", 00:16:17.122 "progress": { 00:16:17.122 "blocks": 130560, 00:16:17.122 "percent": 66 00:16:17.122 } 00:16:17.122 }, 00:16:17.122 "base_bdevs_list": [ 00:16:17.122 { 00:16:17.122 "name": "spare", 00:16:17.122 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:17.122 "is_configured": true, 00:16:17.122 "data_offset": 0, 00:16:17.122 "data_size": 65536 00:16:17.122 }, 00:16:17.122 { 00:16:17.122 "name": "BaseBdev2", 00:16:17.122 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:17.122 "is_configured": true, 00:16:17.122 "data_offset": 0, 00:16:17.122 "data_size": 65536 00:16:17.122 }, 00:16:17.122 { 
00:16:17.122 "name": "BaseBdev3", 00:16:17.122 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:17.122 "is_configured": true, 00:16:17.122 "data_offset": 0, 00:16:17.122 "data_size": 65536 00:16:17.122 }, 00:16:17.122 { 00:16:17.122 "name": "BaseBdev4", 00:16:17.122 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:17.122 "is_configured": true, 00:16:17.122 "data_offset": 0, 00:16:17.122 "data_size": 65536 00:16:17.122 } 00:16:17.122 ] 00:16:17.122 }' 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.122 21:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.122 21:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.122 21:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.062 "name": "raid_bdev1", 00:16:18.062 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:18.062 "strip_size_kb": 64, 00:16:18.062 "state": "online", 00:16:18.062 "raid_level": "raid5f", 00:16:18.062 "superblock": false, 00:16:18.062 "num_base_bdevs": 4, 00:16:18.062 "num_base_bdevs_discovered": 4, 00:16:18.062 "num_base_bdevs_operational": 4, 00:16:18.062 "process": { 00:16:18.062 "type": "rebuild", 00:16:18.062 "target": "spare", 00:16:18.062 "progress": { 00:16:18.062 "blocks": 153600, 00:16:18.062 "percent": 78 00:16:18.062 } 00:16:18.062 }, 00:16:18.062 "base_bdevs_list": [ 00:16:18.062 { 00:16:18.062 "name": "spare", 00:16:18.062 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:18.062 "is_configured": true, 00:16:18.062 "data_offset": 0, 00:16:18.062 "data_size": 65536 00:16:18.062 }, 00:16:18.062 { 00:16:18.062 "name": "BaseBdev2", 00:16:18.062 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:18.062 "is_configured": true, 00:16:18.062 "data_offset": 0, 00:16:18.062 "data_size": 65536 00:16:18.062 }, 00:16:18.062 { 00:16:18.062 "name": "BaseBdev3", 00:16:18.062 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:18.062 "is_configured": true, 00:16:18.062 "data_offset": 0, 00:16:18.062 "data_size": 65536 00:16:18.062 }, 00:16:18.062 { 00:16:18.062 "name": "BaseBdev4", 00:16:18.062 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:18.062 "is_configured": true, 00:16:18.062 "data_offset": 0, 00:16:18.062 "data_size": 65536 00:16:18.062 } 00:16:18.062 ] 00:16:18.062 }' 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.062 21:23:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.062 21:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.453 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.453 "name": "raid_bdev1", 00:16:19.453 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:19.453 "strip_size_kb": 64, 00:16:19.453 "state": "online", 00:16:19.453 "raid_level": "raid5f", 00:16:19.453 "superblock": false, 00:16:19.454 "num_base_bdevs": 4, 00:16:19.454 
"num_base_bdevs_discovered": 4, 00:16:19.454 "num_base_bdevs_operational": 4, 00:16:19.454 "process": { 00:16:19.454 "type": "rebuild", 00:16:19.454 "target": "spare", 00:16:19.454 "progress": { 00:16:19.454 "blocks": 174720, 00:16:19.454 "percent": 88 00:16:19.454 } 00:16:19.454 }, 00:16:19.454 "base_bdevs_list": [ 00:16:19.454 { 00:16:19.454 "name": "spare", 00:16:19.454 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:19.454 "is_configured": true, 00:16:19.454 "data_offset": 0, 00:16:19.454 "data_size": 65536 00:16:19.454 }, 00:16:19.454 { 00:16:19.454 "name": "BaseBdev2", 00:16:19.454 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:19.454 "is_configured": true, 00:16:19.454 "data_offset": 0, 00:16:19.454 "data_size": 65536 00:16:19.454 }, 00:16:19.454 { 00:16:19.454 "name": "BaseBdev3", 00:16:19.454 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:19.454 "is_configured": true, 00:16:19.454 "data_offset": 0, 00:16:19.454 "data_size": 65536 00:16:19.454 }, 00:16:19.454 { 00:16:19.454 "name": "BaseBdev4", 00:16:19.454 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:19.454 "is_configured": true, 00:16:19.454 "data_offset": 0, 00:16:19.454 "data_size": 65536 00:16:19.454 } 00:16:19.454 ] 00:16:19.454 }' 00:16:19.454 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.454 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.454 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.454 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.454 21:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.400 [2024-11-26 21:23:38.329140] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:20.400 [2024-11-26 21:23:38.329258] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:20.400 [2024-11-26 21:23:38.329326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.400 "name": "raid_bdev1", 00:16:20.400 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:20.400 "strip_size_kb": 64, 00:16:20.400 "state": "online", 00:16:20.400 "raid_level": "raid5f", 00:16:20.400 "superblock": false, 00:16:20.400 "num_base_bdevs": 4, 00:16:20.400 "num_base_bdevs_discovered": 4, 00:16:20.400 "num_base_bdevs_operational": 4, 00:16:20.400 "process": { 00:16:20.400 "type": "rebuild", 00:16:20.400 "target": "spare", 00:16:20.400 "progress": { 00:16:20.400 "blocks": 195840, 00:16:20.400 
"percent": 99 00:16:20.400 } 00:16:20.400 }, 00:16:20.400 "base_bdevs_list": [ 00:16:20.400 { 00:16:20.400 "name": "spare", 00:16:20.400 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:20.400 "is_configured": true, 00:16:20.400 "data_offset": 0, 00:16:20.400 "data_size": 65536 00:16:20.400 }, 00:16:20.400 { 00:16:20.400 "name": "BaseBdev2", 00:16:20.400 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:20.400 "is_configured": true, 00:16:20.400 "data_offset": 0, 00:16:20.400 "data_size": 65536 00:16:20.400 }, 00:16:20.400 { 00:16:20.400 "name": "BaseBdev3", 00:16:20.400 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:20.400 "is_configured": true, 00:16:20.400 "data_offset": 0, 00:16:20.400 "data_size": 65536 00:16:20.400 }, 00:16:20.400 { 00:16:20.400 "name": "BaseBdev4", 00:16:20.400 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:20.400 "is_configured": true, 00:16:20.400 "data_offset": 0, 00:16:20.400 "data_size": 65536 00:16:20.400 } 00:16:20.400 ] 00:16:20.400 }' 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.400 21:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.348 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.348 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.348 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.348 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:21.348 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.348 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.348 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.348 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.348 21:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.348 21:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.348 21:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.606 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.606 "name": "raid_bdev1", 00:16:21.606 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:21.606 "strip_size_kb": 64, 00:16:21.606 "state": "online", 00:16:21.606 "raid_level": "raid5f", 00:16:21.606 "superblock": false, 00:16:21.606 "num_base_bdevs": 4, 00:16:21.606 "num_base_bdevs_discovered": 4, 00:16:21.606 "num_base_bdevs_operational": 4, 00:16:21.606 "base_bdevs_list": [ 00:16:21.606 { 00:16:21.606 "name": "spare", 00:16:21.606 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:21.606 "is_configured": true, 00:16:21.606 "data_offset": 0, 00:16:21.606 "data_size": 65536 00:16:21.606 }, 00:16:21.606 { 00:16:21.606 "name": "BaseBdev2", 00:16:21.606 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:21.606 "is_configured": true, 00:16:21.606 "data_offset": 0, 00:16:21.606 "data_size": 65536 00:16:21.606 }, 00:16:21.606 { 00:16:21.606 "name": "BaseBdev3", 00:16:21.606 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:21.607 "is_configured": true, 00:16:21.607 "data_offset": 0, 00:16:21.607 "data_size": 65536 00:16:21.607 }, 00:16:21.607 { 00:16:21.607 "name": "BaseBdev4", 00:16:21.607 
"uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:21.607 "is_configured": true, 00:16:21.607 "data_offset": 0, 00:16:21.607 "data_size": 65536 00:16:21.607 } 00:16:21.607 ] 00:16:21.607 }' 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.607 "name": "raid_bdev1", 00:16:21.607 "uuid": 
"27d19e73-9876-44f3-aac0-034dac325059", 00:16:21.607 "strip_size_kb": 64, 00:16:21.607 "state": "online", 00:16:21.607 "raid_level": "raid5f", 00:16:21.607 "superblock": false, 00:16:21.607 "num_base_bdevs": 4, 00:16:21.607 "num_base_bdevs_discovered": 4, 00:16:21.607 "num_base_bdevs_operational": 4, 00:16:21.607 "base_bdevs_list": [ 00:16:21.607 { 00:16:21.607 "name": "spare", 00:16:21.607 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:21.607 "is_configured": true, 00:16:21.607 "data_offset": 0, 00:16:21.607 "data_size": 65536 00:16:21.607 }, 00:16:21.607 { 00:16:21.607 "name": "BaseBdev2", 00:16:21.607 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:21.607 "is_configured": true, 00:16:21.607 "data_offset": 0, 00:16:21.607 "data_size": 65536 00:16:21.607 }, 00:16:21.607 { 00:16:21.607 "name": "BaseBdev3", 00:16:21.607 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:21.607 "is_configured": true, 00:16:21.607 "data_offset": 0, 00:16:21.607 "data_size": 65536 00:16:21.607 }, 00:16:21.607 { 00:16:21.607 "name": "BaseBdev4", 00:16:21.607 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:21.607 "is_configured": true, 00:16:21.607 "data_offset": 0, 00:16:21.607 "data_size": 65536 00:16:21.607 } 00:16:21.607 ] 00:16:21.607 }' 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.607 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.867 21:23:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.867 "name": "raid_bdev1", 00:16:21.867 "uuid": "27d19e73-9876-44f3-aac0-034dac325059", 00:16:21.867 "strip_size_kb": 64, 00:16:21.867 "state": "online", 00:16:21.867 "raid_level": "raid5f", 00:16:21.867 "superblock": false, 00:16:21.867 "num_base_bdevs": 4, 00:16:21.867 "num_base_bdevs_discovered": 4, 00:16:21.867 "num_base_bdevs_operational": 4, 00:16:21.867 "base_bdevs_list": [ 00:16:21.867 { 00:16:21.867 "name": "spare", 00:16:21.867 "uuid": "ec568ff2-96fe-55fe-af43-2d08f9d83098", 00:16:21.867 "is_configured": 
true, 00:16:21.867 "data_offset": 0, 00:16:21.867 "data_size": 65536 00:16:21.867 }, 00:16:21.867 { 00:16:21.867 "name": "BaseBdev2", 00:16:21.867 "uuid": "fb12a85b-a0dd-5a96-bb26-42c499690de5", 00:16:21.867 "is_configured": true, 00:16:21.867 "data_offset": 0, 00:16:21.867 "data_size": 65536 00:16:21.867 }, 00:16:21.867 { 00:16:21.867 "name": "BaseBdev3", 00:16:21.867 "uuid": "f0004d05-6b61-57eb-964a-2725a07718ec", 00:16:21.867 "is_configured": true, 00:16:21.867 "data_offset": 0, 00:16:21.867 "data_size": 65536 00:16:21.867 }, 00:16:21.867 { 00:16:21.867 "name": "BaseBdev4", 00:16:21.867 "uuid": "883fe383-4af6-5b01-b12e-769506b457f3", 00:16:21.867 "is_configured": true, 00:16:21.867 "data_offset": 0, 00:16:21.867 "data_size": 65536 00:16:21.867 } 00:16:21.867 ] 00:16:21.867 }' 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.867 21:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.128 [2024-11-26 21:23:40.207153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.128 [2024-11-26 21:23:40.207236] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.128 [2024-11-26 21:23:40.207350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.128 [2024-11-26 21:23:40.207485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.128 [2024-11-26 21:23:40.207549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:22.128 21:23:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:22.128 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:22.388 /dev/nbd0 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.388 1+0 records in 00:16:22.388 1+0 records out 00:16:22.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382633 s, 10.7 MB/s 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:22.388 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:22.649 /dev/nbd1 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.649 1+0 records in 00:16:22.649 1+0 records out 00:16:22.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583894 s, 7.0 MB/s 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:22.649 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:22.909 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:22.909 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.909 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:22.909 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:22.909 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:22.909 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.909 21:23:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:23.170 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:23.170 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:23.170 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:23.170 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.170 21:23:41 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.170 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:23.170 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:23.170 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.170 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:23.170 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84392 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84392 ']' 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84392 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84392 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84392' 00:16:23.430 killing process with pid 84392 00:16:23.430 Received shutdown signal, test time was about 60.000000 seconds 00:16:23.430 00:16:23.430 Latency(us) 00:16:23.430 [2024-11-26T21:23:41.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.430 [2024-11-26T21:23:41.586Z] =================================================================================================================== 00:16:23.430 [2024-11-26T21:23:41.586Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84392 00:16:23.430 [2024-11-26 21:23:41.430164] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:23.430 21:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84392 00:16:24.001 [2024-11-26 21:23:41.942934] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:25.385 00:16:25.385 real 0m20.333s 00:16:25.385 user 0m24.183s 00:16:25.385 sys 0m2.451s 00:16:25.385 ************************************ 00:16:25.385 END TEST raid5f_rebuild_test 00:16:25.385 ************************************ 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.385 21:23:43 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:25.385 21:23:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:25.385 21:23:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.385 21:23:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.385 ************************************ 00:16:25.385 START TEST raid5f_rebuild_test_sb 00:16:25.385 ************************************ 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84921 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84921 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84921 ']' 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.385 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.386 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.386 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.386 21:23:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.386 [2024-11-26 21:23:43.296293] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:25.386 [2024-11-26 21:23:43.296468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:25.386 Zero copy mechanism will not be used. 
00:16:25.386 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84921 ] 00:16:25.386 [2024-11-26 21:23:43.470850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.646 [2024-11-26 21:23:43.599452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.906 [2024-11-26 21:23:43.831542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.906 [2024-11-26 21:23:43.831654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.166 BaseBdev1_malloc 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.166 [2024-11-26 21:23:44.176701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:26.166 [2024-11-26 21:23:44.176860] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:26.166 [2024-11-26 21:23:44.176893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:26.166 [2024-11-26 21:23:44.176906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.166 [2024-11-26 21:23:44.179310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.166 [2024-11-26 21:23:44.179350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:26.166 BaseBdev1 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.166 BaseBdev2_malloc 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.166 [2024-11-26 21:23:44.236801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:26.166 [2024-11-26 21:23:44.236868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.166 [2024-11-26 21:23:44.236894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:26.166 
[2024-11-26 21:23:44.236907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.166 [2024-11-26 21:23:44.239276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.166 [2024-11-26 21:23:44.239314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:26.166 BaseBdev2 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.166 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.426 BaseBdev3_malloc 00:16:26.426 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.426 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.427 [2024-11-26 21:23:44.330593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:26.427 [2024-11-26 21:23:44.330648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.427 [2024-11-26 21:23:44.330686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:26.427 [2024-11-26 21:23:44.330698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.427 [2024-11-26 21:23:44.333129] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.427 [2024-11-26 21:23:44.333207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:26.427 BaseBdev3 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.427 BaseBdev4_malloc 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.427 [2024-11-26 21:23:44.391544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:26.427 [2024-11-26 21:23:44.391661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.427 [2024-11-26 21:23:44.391688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:26.427 [2024-11-26 21:23:44.391700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.427 [2024-11-26 21:23:44.394128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.427 [2024-11-26 21:23:44.394166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:16:26.427 BaseBdev4 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.427 spare_malloc 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.427 spare_delay 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.427 [2024-11-26 21:23:44.461328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:26.427 [2024-11-26 21:23:44.461381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.427 [2024-11-26 21:23:44.461414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:26.427 [2024-11-26 21:23:44.461425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.427 [2024-11-26 21:23:44.463723] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.427 [2024-11-26 21:23:44.463819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:26.427 spare 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.427 [2024-11-26 21:23:44.473365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.427 [2024-11-26 21:23:44.475336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.427 [2024-11-26 21:23:44.475400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:26.427 [2024-11-26 21:23:44.475447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:26.427 [2024-11-26 21:23:44.475622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:26.427 [2024-11-26 21:23:44.475638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:26.427 [2024-11-26 21:23:44.475873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:26.427 [2024-11-26 21:23:44.483034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:26.427 [2024-11-26 21:23:44.483056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:26.427 [2024-11-26 21:23:44.483253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.427 "name": "raid_bdev1", 00:16:26.427 "uuid": 
"2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:26.427 "strip_size_kb": 64, 00:16:26.427 "state": "online", 00:16:26.427 "raid_level": "raid5f", 00:16:26.427 "superblock": true, 00:16:26.427 "num_base_bdevs": 4, 00:16:26.427 "num_base_bdevs_discovered": 4, 00:16:26.427 "num_base_bdevs_operational": 4, 00:16:26.427 "base_bdevs_list": [ 00:16:26.427 { 00:16:26.427 "name": "BaseBdev1", 00:16:26.427 "uuid": "24f9ac24-b292-593f-b5e0-11a6e5074b5f", 00:16:26.427 "is_configured": true, 00:16:26.427 "data_offset": 2048, 00:16:26.427 "data_size": 63488 00:16:26.427 }, 00:16:26.427 { 00:16:26.427 "name": "BaseBdev2", 00:16:26.427 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:26.427 "is_configured": true, 00:16:26.427 "data_offset": 2048, 00:16:26.427 "data_size": 63488 00:16:26.427 }, 00:16:26.427 { 00:16:26.427 "name": "BaseBdev3", 00:16:26.427 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:26.427 "is_configured": true, 00:16:26.427 "data_offset": 2048, 00:16:26.427 "data_size": 63488 00:16:26.427 }, 00:16:26.427 { 00:16:26.427 "name": "BaseBdev4", 00:16:26.427 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:26.427 "is_configured": true, 00:16:26.427 "data_offset": 2048, 00:16:26.427 "data_size": 63488 00:16:26.427 } 00:16:26.427 ] 00:16:26.427 }' 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.427 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.997 [2024-11-26 21:23:44.903624] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.997 21:23:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:27.258 [2024-11-26 21:23:45.155071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:27.258 /dev/nbd0 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:27.258 1+0 records in 00:16:27.258 1+0 records out 00:16:27.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238872 s, 17.1 MB/s 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:27.258 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:27.827 496+0 records in 00:16:27.827 496+0 records out 00:16:27.827 97517568 bytes (98 MB, 93 MiB) copied, 0.470031 s, 207 MB/s 00:16:27.827 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:27.827 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.827 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:27.827 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:27.827 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:27.827 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:27.827 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:27.827 [2024-11-26 21:23:45.885799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.827 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:27.827 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:27.827 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:27.827 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.828 [2024-11-26 21:23:45.919439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.828 "name": "raid_bdev1", 00:16:27.828 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:27.828 "strip_size_kb": 64, 00:16:27.828 "state": "online", 00:16:27.828 "raid_level": "raid5f", 00:16:27.828 "superblock": true, 00:16:27.828 "num_base_bdevs": 4, 00:16:27.828 "num_base_bdevs_discovered": 3, 00:16:27.828 "num_base_bdevs_operational": 3, 00:16:27.828 "base_bdevs_list": [ 00:16:27.828 { 00:16:27.828 "name": null, 00:16:27.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.828 "is_configured": 
false, 00:16:27.828 "data_offset": 0, 00:16:27.828 "data_size": 63488 00:16:27.828 }, 00:16:27.828 { 00:16:27.828 "name": "BaseBdev2", 00:16:27.828 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:27.828 "is_configured": true, 00:16:27.828 "data_offset": 2048, 00:16:27.828 "data_size": 63488 00:16:27.828 }, 00:16:27.828 { 00:16:27.828 "name": "BaseBdev3", 00:16:27.828 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:27.828 "is_configured": true, 00:16:27.828 "data_offset": 2048, 00:16:27.828 "data_size": 63488 00:16:27.828 }, 00:16:27.828 { 00:16:27.828 "name": "BaseBdev4", 00:16:27.828 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:27.828 "is_configured": true, 00:16:27.828 "data_offset": 2048, 00:16:27.828 "data_size": 63488 00:16:27.828 } 00:16:27.828 ] 00:16:27.828 }' 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.828 21:23:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.397 21:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:28.397 21:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.397 21:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.397 [2024-11-26 21:23:46.358651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.397 [2024-11-26 21:23:46.374086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:28.397 21:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.397 21:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:28.397 [2024-11-26 21:23:46.383445] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.337 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.337 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.337 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.337 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.337 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.337 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.337 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.337 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.337 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.337 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.337 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.337 "name": "raid_bdev1", 00:16:29.337 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:29.337 "strip_size_kb": 64, 00:16:29.337 "state": "online", 00:16:29.337 "raid_level": "raid5f", 00:16:29.337 "superblock": true, 00:16:29.337 "num_base_bdevs": 4, 00:16:29.337 "num_base_bdevs_discovered": 4, 00:16:29.337 "num_base_bdevs_operational": 4, 00:16:29.337 "process": { 00:16:29.337 "type": "rebuild", 00:16:29.337 "target": "spare", 00:16:29.337 "progress": { 00:16:29.337 "blocks": 19200, 00:16:29.337 "percent": 10 00:16:29.337 } 00:16:29.337 }, 00:16:29.337 "base_bdevs_list": [ 00:16:29.337 { 00:16:29.337 "name": "spare", 00:16:29.337 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:29.337 "is_configured": true, 00:16:29.337 "data_offset": 2048, 00:16:29.337 "data_size": 63488 00:16:29.337 }, 
00:16:29.337 { 00:16:29.337 "name": "BaseBdev2", 00:16:29.337 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:29.337 "is_configured": true, 00:16:29.337 "data_offset": 2048, 00:16:29.337 "data_size": 63488 00:16:29.337 }, 00:16:29.337 { 00:16:29.338 "name": "BaseBdev3", 00:16:29.338 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:29.338 "is_configured": true, 00:16:29.338 "data_offset": 2048, 00:16:29.338 "data_size": 63488 00:16:29.338 }, 00:16:29.338 { 00:16:29.338 "name": "BaseBdev4", 00:16:29.338 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:29.338 "is_configured": true, 00:16:29.338 "data_offset": 2048, 00:16:29.338 "data_size": 63488 00:16:29.338 } 00:16:29.338 ] 00:16:29.338 }' 00:16:29.338 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.338 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.338 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.600 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.601 [2024-11-26 21:23:47.518574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.601 [2024-11-26 21:23:47.590617] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.601 [2024-11-26 21:23:47.590688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.601 [2024-11-26 21:23:47.590708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.601 
[2024-11-26 21:23:47.590719] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.601 "name": "raid_bdev1", 00:16:29.601 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:29.601 "strip_size_kb": 64, 00:16:29.601 "state": "online", 00:16:29.601 "raid_level": "raid5f", 00:16:29.601 "superblock": true, 00:16:29.601 "num_base_bdevs": 4, 00:16:29.601 "num_base_bdevs_discovered": 3, 00:16:29.601 "num_base_bdevs_operational": 3, 00:16:29.601 "base_bdevs_list": [ 00:16:29.601 { 00:16:29.601 "name": null, 00:16:29.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.601 "is_configured": false, 00:16:29.601 "data_offset": 0, 00:16:29.601 "data_size": 63488 00:16:29.601 }, 00:16:29.601 { 00:16:29.601 "name": "BaseBdev2", 00:16:29.601 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:29.601 "is_configured": true, 00:16:29.601 "data_offset": 2048, 00:16:29.601 "data_size": 63488 00:16:29.601 }, 00:16:29.601 { 00:16:29.601 "name": "BaseBdev3", 00:16:29.601 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:29.601 "is_configured": true, 00:16:29.601 "data_offset": 2048, 00:16:29.601 "data_size": 63488 00:16:29.601 }, 00:16:29.601 { 00:16:29.601 "name": "BaseBdev4", 00:16:29.601 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:29.601 "is_configured": true, 00:16:29.601 "data_offset": 2048, 00:16:29.601 "data_size": 63488 00:16:29.601 } 00:16:29.601 ] 00:16:29.601 }' 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.601 21:23:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.860 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.860 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.120 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:30.120 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:16:30.120 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.120 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.120 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.120 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.120 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.120 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.120 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.120 "name": "raid_bdev1", 00:16:30.120 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:30.120 "strip_size_kb": 64, 00:16:30.120 "state": "online", 00:16:30.120 "raid_level": "raid5f", 00:16:30.120 "superblock": true, 00:16:30.120 "num_base_bdevs": 4, 00:16:30.120 "num_base_bdevs_discovered": 3, 00:16:30.120 "num_base_bdevs_operational": 3, 00:16:30.120 "base_bdevs_list": [ 00:16:30.120 { 00:16:30.120 "name": null, 00:16:30.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.120 "is_configured": false, 00:16:30.120 "data_offset": 0, 00:16:30.120 "data_size": 63488 00:16:30.120 }, 00:16:30.121 { 00:16:30.121 "name": "BaseBdev2", 00:16:30.121 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:30.121 "is_configured": true, 00:16:30.121 "data_offset": 2048, 00:16:30.121 "data_size": 63488 00:16:30.121 }, 00:16:30.121 { 00:16:30.121 "name": "BaseBdev3", 00:16:30.121 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:30.121 "is_configured": true, 00:16:30.121 "data_offset": 2048, 00:16:30.121 "data_size": 63488 00:16:30.121 }, 00:16:30.121 { 00:16:30.121 "name": "BaseBdev4", 00:16:30.121 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 
00:16:30.121 "is_configured": true, 00:16:30.121 "data_offset": 2048, 00:16:30.121 "data_size": 63488 00:16:30.121 } 00:16:30.121 ] 00:16:30.121 }' 00:16:30.121 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.121 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:30.121 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.121 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.121 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:30.121 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.121 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.121 [2024-11-26 21:23:48.159128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.121 [2024-11-26 21:23:48.172992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:30.121 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.121 21:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:30.121 [2024-11-26 21:23:48.181810] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:31.061 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.061 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.061 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.061 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:31.061 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.061 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.061 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.061 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.061 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.061 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.320 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.320 "name": "raid_bdev1", 00:16:31.320 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:31.320 "strip_size_kb": 64, 00:16:31.320 "state": "online", 00:16:31.320 "raid_level": "raid5f", 00:16:31.320 "superblock": true, 00:16:31.320 "num_base_bdevs": 4, 00:16:31.320 "num_base_bdevs_discovered": 4, 00:16:31.320 "num_base_bdevs_operational": 4, 00:16:31.320 "process": { 00:16:31.320 "type": "rebuild", 00:16:31.320 "target": "spare", 00:16:31.320 "progress": { 00:16:31.320 "blocks": 19200, 00:16:31.320 "percent": 10 00:16:31.320 } 00:16:31.320 }, 00:16:31.320 "base_bdevs_list": [ 00:16:31.320 { 00:16:31.320 "name": "spare", 00:16:31.320 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:31.320 "is_configured": true, 00:16:31.320 "data_offset": 2048, 00:16:31.320 "data_size": 63488 00:16:31.320 }, 00:16:31.320 { 00:16:31.320 "name": "BaseBdev2", 00:16:31.320 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:31.320 "is_configured": true, 00:16:31.320 "data_offset": 2048, 00:16:31.320 "data_size": 63488 00:16:31.320 }, 00:16:31.320 { 00:16:31.320 "name": "BaseBdev3", 00:16:31.320 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:31.320 "is_configured": true, 00:16:31.320 "data_offset": 2048, 
00:16:31.320 "data_size": 63488 00:16:31.320 }, 00:16:31.320 { 00:16:31.320 "name": "BaseBdev4", 00:16:31.320 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:31.320 "is_configured": true, 00:16:31.320 "data_offset": 2048, 00:16:31.320 "data_size": 63488 00:16:31.320 } 00:16:31.320 ] 00:16:31.320 }' 00:16:31.320 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.320 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.320 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.320 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.320 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:31.320 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:31.320 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:31.320 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:31.320 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:31.320 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=631 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.321 "name": "raid_bdev1", 00:16:31.321 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:31.321 "strip_size_kb": 64, 00:16:31.321 "state": "online", 00:16:31.321 "raid_level": "raid5f", 00:16:31.321 "superblock": true, 00:16:31.321 "num_base_bdevs": 4, 00:16:31.321 "num_base_bdevs_discovered": 4, 00:16:31.321 "num_base_bdevs_operational": 4, 00:16:31.321 "process": { 00:16:31.321 "type": "rebuild", 00:16:31.321 "target": "spare", 00:16:31.321 "progress": { 00:16:31.321 "blocks": 21120, 00:16:31.321 "percent": 11 00:16:31.321 } 00:16:31.321 }, 00:16:31.321 "base_bdevs_list": [ 00:16:31.321 { 00:16:31.321 "name": "spare", 00:16:31.321 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:31.321 "is_configured": true, 00:16:31.321 "data_offset": 2048, 00:16:31.321 "data_size": 63488 00:16:31.321 }, 00:16:31.321 { 00:16:31.321 "name": "BaseBdev2", 00:16:31.321 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:31.321 "is_configured": true, 00:16:31.321 "data_offset": 2048, 00:16:31.321 "data_size": 63488 00:16:31.321 }, 00:16:31.321 { 00:16:31.321 "name": "BaseBdev3", 00:16:31.321 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:31.321 "is_configured": true, 00:16:31.321 "data_offset": 2048, 
00:16:31.321 "data_size": 63488 00:16:31.321 }, 00:16:31.321 { 00:16:31.321 "name": "BaseBdev4", 00:16:31.321 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:31.321 "is_configured": true, 00:16:31.321 "data_offset": 2048, 00:16:31.321 "data_size": 63488 00:16:31.321 } 00:16:31.321 ] 00:16:31.321 }' 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.321 21:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.703 "name": "raid_bdev1", 00:16:32.703 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:32.703 "strip_size_kb": 64, 00:16:32.703 "state": "online", 00:16:32.703 "raid_level": "raid5f", 00:16:32.703 "superblock": true, 00:16:32.703 "num_base_bdevs": 4, 00:16:32.703 "num_base_bdevs_discovered": 4, 00:16:32.703 "num_base_bdevs_operational": 4, 00:16:32.703 "process": { 00:16:32.703 "type": "rebuild", 00:16:32.703 "target": "spare", 00:16:32.703 "progress": { 00:16:32.703 "blocks": 42240, 00:16:32.703 "percent": 22 00:16:32.703 } 00:16:32.703 }, 00:16:32.703 "base_bdevs_list": [ 00:16:32.703 { 00:16:32.703 "name": "spare", 00:16:32.703 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:32.703 "is_configured": true, 00:16:32.703 "data_offset": 2048, 00:16:32.703 "data_size": 63488 00:16:32.703 }, 00:16:32.703 { 00:16:32.703 "name": "BaseBdev2", 00:16:32.703 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:32.703 "is_configured": true, 00:16:32.703 "data_offset": 2048, 00:16:32.703 "data_size": 63488 00:16:32.703 }, 00:16:32.703 { 00:16:32.703 "name": "BaseBdev3", 00:16:32.703 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:32.703 "is_configured": true, 00:16:32.703 "data_offset": 2048, 00:16:32.703 "data_size": 63488 00:16:32.703 }, 00:16:32.703 { 00:16:32.703 "name": "BaseBdev4", 00:16:32.703 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:32.703 "is_configured": true, 00:16:32.703 "data_offset": 2048, 00:16:32.703 "data_size": 63488 00:16:32.703 } 00:16:32.703 ] 00:16:32.703 }' 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.703 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.703 21:23:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.704 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.704 21:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.643 "name": "raid_bdev1", 00:16:33.643 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:33.643 "strip_size_kb": 64, 00:16:33.643 "state": "online", 00:16:33.643 "raid_level": "raid5f", 00:16:33.643 "superblock": true, 00:16:33.643 "num_base_bdevs": 4, 00:16:33.643 "num_base_bdevs_discovered": 4, 00:16:33.643 "num_base_bdevs_operational": 
4, 00:16:33.643 "process": { 00:16:33.643 "type": "rebuild", 00:16:33.643 "target": "spare", 00:16:33.643 "progress": { 00:16:33.643 "blocks": 65280, 00:16:33.643 "percent": 34 00:16:33.643 } 00:16:33.643 }, 00:16:33.643 "base_bdevs_list": [ 00:16:33.643 { 00:16:33.643 "name": "spare", 00:16:33.643 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:33.643 "is_configured": true, 00:16:33.643 "data_offset": 2048, 00:16:33.643 "data_size": 63488 00:16:33.643 }, 00:16:33.643 { 00:16:33.643 "name": "BaseBdev2", 00:16:33.643 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:33.643 "is_configured": true, 00:16:33.643 "data_offset": 2048, 00:16:33.643 "data_size": 63488 00:16:33.643 }, 00:16:33.643 { 00:16:33.643 "name": "BaseBdev3", 00:16:33.643 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:33.643 "is_configured": true, 00:16:33.643 "data_offset": 2048, 00:16:33.643 "data_size": 63488 00:16:33.643 }, 00:16:33.643 { 00:16:33.643 "name": "BaseBdev4", 00:16:33.643 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:33.643 "is_configured": true, 00:16:33.643 "data_offset": 2048, 00:16:33.643 "data_size": 63488 00:16:33.643 } 00:16:33.643 ] 00:16:33.643 }' 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.643 21:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.024 
21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.024 "name": "raid_bdev1", 00:16:35.024 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:35.024 "strip_size_kb": 64, 00:16:35.024 "state": "online", 00:16:35.024 "raid_level": "raid5f", 00:16:35.024 "superblock": true, 00:16:35.024 "num_base_bdevs": 4, 00:16:35.024 "num_base_bdevs_discovered": 4, 00:16:35.024 "num_base_bdevs_operational": 4, 00:16:35.024 "process": { 00:16:35.024 "type": "rebuild", 00:16:35.024 "target": "spare", 00:16:35.024 "progress": { 00:16:35.024 "blocks": 86400, 00:16:35.024 "percent": 45 00:16:35.024 } 00:16:35.024 }, 00:16:35.024 "base_bdevs_list": [ 00:16:35.024 { 00:16:35.024 "name": "spare", 00:16:35.024 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:35.024 "is_configured": true, 00:16:35.024 "data_offset": 2048, 00:16:35.024 "data_size": 63488 00:16:35.024 }, 00:16:35.024 { 00:16:35.024 "name": "BaseBdev2", 00:16:35.024 "uuid": 
"a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:35.024 "is_configured": true, 00:16:35.024 "data_offset": 2048, 00:16:35.024 "data_size": 63488 00:16:35.024 }, 00:16:35.024 { 00:16:35.024 "name": "BaseBdev3", 00:16:35.024 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:35.024 "is_configured": true, 00:16:35.024 "data_offset": 2048, 00:16:35.024 "data_size": 63488 00:16:35.024 }, 00:16:35.024 { 00:16:35.024 "name": "BaseBdev4", 00:16:35.024 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:35.024 "is_configured": true, 00:16:35.024 "data_offset": 2048, 00:16:35.024 "data_size": 63488 00:16:35.024 } 00:16:35.024 ] 00:16:35.024 }' 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.024 21:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.964 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.965 "name": "raid_bdev1", 00:16:35.965 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:35.965 "strip_size_kb": 64, 00:16:35.965 "state": "online", 00:16:35.965 "raid_level": "raid5f", 00:16:35.965 "superblock": true, 00:16:35.965 "num_base_bdevs": 4, 00:16:35.965 "num_base_bdevs_discovered": 4, 00:16:35.965 "num_base_bdevs_operational": 4, 00:16:35.965 "process": { 00:16:35.965 "type": "rebuild", 00:16:35.965 "target": "spare", 00:16:35.965 "progress": { 00:16:35.965 "blocks": 109440, 00:16:35.965 "percent": 57 00:16:35.965 } 00:16:35.965 }, 00:16:35.965 "base_bdevs_list": [ 00:16:35.965 { 00:16:35.965 "name": "spare", 00:16:35.965 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:35.965 "is_configured": true, 00:16:35.965 "data_offset": 2048, 00:16:35.965 "data_size": 63488 00:16:35.965 }, 00:16:35.965 { 00:16:35.965 "name": "BaseBdev2", 00:16:35.965 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:35.965 "is_configured": true, 00:16:35.965 "data_offset": 2048, 00:16:35.965 "data_size": 63488 00:16:35.965 }, 00:16:35.965 { 00:16:35.965 "name": "BaseBdev3", 00:16:35.965 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:35.965 "is_configured": true, 00:16:35.965 "data_offset": 2048, 00:16:35.965 "data_size": 63488 00:16:35.965 }, 00:16:35.965 { 00:16:35.965 "name": "BaseBdev4", 00:16:35.965 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:35.965 "is_configured": true, 00:16:35.965 "data_offset": 
2048, 00:16:35.965 "data_size": 63488 00:16:35.965 } 00:16:35.965 ] 00:16:35.965 }' 00:16:35.965 21:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.965 21:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.965 21:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.965 21:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.965 21:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.346 
"name": "raid_bdev1", 00:16:37.346 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:37.346 "strip_size_kb": 64, 00:16:37.346 "state": "online", 00:16:37.346 "raid_level": "raid5f", 00:16:37.346 "superblock": true, 00:16:37.346 "num_base_bdevs": 4, 00:16:37.346 "num_base_bdevs_discovered": 4, 00:16:37.346 "num_base_bdevs_operational": 4, 00:16:37.346 "process": { 00:16:37.346 "type": "rebuild", 00:16:37.346 "target": "spare", 00:16:37.346 "progress": { 00:16:37.346 "blocks": 130560, 00:16:37.346 "percent": 68 00:16:37.346 } 00:16:37.346 }, 00:16:37.346 "base_bdevs_list": [ 00:16:37.346 { 00:16:37.346 "name": "spare", 00:16:37.346 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:37.346 "is_configured": true, 00:16:37.346 "data_offset": 2048, 00:16:37.346 "data_size": 63488 00:16:37.346 }, 00:16:37.346 { 00:16:37.346 "name": "BaseBdev2", 00:16:37.346 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:37.346 "is_configured": true, 00:16:37.346 "data_offset": 2048, 00:16:37.346 "data_size": 63488 00:16:37.346 }, 00:16:37.346 { 00:16:37.346 "name": "BaseBdev3", 00:16:37.346 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:37.346 "is_configured": true, 00:16:37.346 "data_offset": 2048, 00:16:37.346 "data_size": 63488 00:16:37.346 }, 00:16:37.346 { 00:16:37.346 "name": "BaseBdev4", 00:16:37.346 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:37.346 "is_configured": true, 00:16:37.346 "data_offset": 2048, 00:16:37.346 "data_size": 63488 00:16:37.346 } 00:16:37.346 ] 00:16:37.346 }' 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.346 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.347 21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.347 
21:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.314 "name": "raid_bdev1", 00:16:38.314 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:38.314 "strip_size_kb": 64, 00:16:38.314 "state": "online", 00:16:38.314 "raid_level": "raid5f", 00:16:38.314 "superblock": true, 00:16:38.314 "num_base_bdevs": 4, 00:16:38.314 "num_base_bdevs_discovered": 4, 00:16:38.314 "num_base_bdevs_operational": 4, 00:16:38.314 "process": { 00:16:38.314 "type": "rebuild", 00:16:38.314 "target": "spare", 00:16:38.314 "progress": { 00:16:38.314 "blocks": 151680, 00:16:38.314 "percent": 79 00:16:38.314 } 00:16:38.314 }, 
00:16:38.314 "base_bdevs_list": [ 00:16:38.314 { 00:16:38.314 "name": "spare", 00:16:38.314 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:38.314 "is_configured": true, 00:16:38.314 "data_offset": 2048, 00:16:38.314 "data_size": 63488 00:16:38.314 }, 00:16:38.314 { 00:16:38.314 "name": "BaseBdev2", 00:16:38.314 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:38.314 "is_configured": true, 00:16:38.314 "data_offset": 2048, 00:16:38.314 "data_size": 63488 00:16:38.314 }, 00:16:38.314 { 00:16:38.314 "name": "BaseBdev3", 00:16:38.314 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:38.314 "is_configured": true, 00:16:38.314 "data_offset": 2048, 00:16:38.314 "data_size": 63488 00:16:38.314 }, 00:16:38.314 { 00:16:38.314 "name": "BaseBdev4", 00:16:38.314 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:38.314 "is_configured": true, 00:16:38.314 "data_offset": 2048, 00:16:38.314 "data_size": 63488 00:16:38.314 } 00:16:38.314 ] 00:16:38.314 }' 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.314 21:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.253 "name": "raid_bdev1", 00:16:39.253 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:39.253 "strip_size_kb": 64, 00:16:39.253 "state": "online", 00:16:39.253 "raid_level": "raid5f", 00:16:39.253 "superblock": true, 00:16:39.253 "num_base_bdevs": 4, 00:16:39.253 "num_base_bdevs_discovered": 4, 00:16:39.253 "num_base_bdevs_operational": 4, 00:16:39.253 "process": { 00:16:39.253 "type": "rebuild", 00:16:39.253 "target": "spare", 00:16:39.253 "progress": { 00:16:39.253 "blocks": 174720, 00:16:39.253 "percent": 91 00:16:39.253 } 00:16:39.253 }, 00:16:39.253 "base_bdevs_list": [ 00:16:39.253 { 00:16:39.253 "name": "spare", 00:16:39.253 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:39.253 "is_configured": true, 00:16:39.253 "data_offset": 2048, 00:16:39.253 "data_size": 63488 00:16:39.253 }, 00:16:39.253 { 00:16:39.253 "name": "BaseBdev2", 00:16:39.253 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:39.253 "is_configured": true, 00:16:39.253 "data_offset": 2048, 00:16:39.253 "data_size": 63488 00:16:39.253 }, 00:16:39.253 { 00:16:39.253 "name": "BaseBdev3", 
00:16:39.253 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:39.253 "is_configured": true, 00:16:39.253 "data_offset": 2048, 00:16:39.253 "data_size": 63488 00:16:39.253 }, 00:16:39.253 { 00:16:39.253 "name": "BaseBdev4", 00:16:39.253 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:39.253 "is_configured": true, 00:16:39.253 "data_offset": 2048, 00:16:39.253 "data_size": 63488 00:16:39.253 } 00:16:39.253 ] 00:16:39.253 }' 00:16:39.253 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.512 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.512 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.512 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.512 21:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.081 [2024-11-26 21:23:58.232254] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:40.081 [2024-11-26 21:23:58.232378] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:40.081 [2024-11-26 21:23:58.232535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.340 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.340 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.340 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.340 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.340 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.340 21:23:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.340 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.600 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.600 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.600 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.600 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.600 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.600 "name": "raid_bdev1", 00:16:40.600 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:40.600 "strip_size_kb": 64, 00:16:40.600 "state": "online", 00:16:40.600 "raid_level": "raid5f", 00:16:40.600 "superblock": true, 00:16:40.600 "num_base_bdevs": 4, 00:16:40.600 "num_base_bdevs_discovered": 4, 00:16:40.600 "num_base_bdevs_operational": 4, 00:16:40.600 "base_bdevs_list": [ 00:16:40.600 { 00:16:40.600 "name": "spare", 00:16:40.600 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:40.600 "is_configured": true, 00:16:40.600 "data_offset": 2048, 00:16:40.600 "data_size": 63488 00:16:40.600 }, 00:16:40.600 { 00:16:40.600 "name": "BaseBdev2", 00:16:40.600 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:40.600 "is_configured": true, 00:16:40.600 "data_offset": 2048, 00:16:40.600 "data_size": 63488 00:16:40.600 }, 00:16:40.600 { 00:16:40.600 "name": "BaseBdev3", 00:16:40.601 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:40.601 "is_configured": true, 00:16:40.601 "data_offset": 2048, 00:16:40.601 "data_size": 63488 00:16:40.601 }, 00:16:40.601 { 00:16:40.601 "name": "BaseBdev4", 00:16:40.601 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:40.601 "is_configured": true, 00:16:40.601 "data_offset": 2048, 
00:16:40.601 "data_size": 63488 00:16:40.601 } 00:16:40.601 ] 00:16:40.601 }' 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.601 "name": "raid_bdev1", 00:16:40.601 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:40.601 "strip_size_kb": 64, 00:16:40.601 
"state": "online", 00:16:40.601 "raid_level": "raid5f", 00:16:40.601 "superblock": true, 00:16:40.601 "num_base_bdevs": 4, 00:16:40.601 "num_base_bdevs_discovered": 4, 00:16:40.601 "num_base_bdevs_operational": 4, 00:16:40.601 "base_bdevs_list": [ 00:16:40.601 { 00:16:40.601 "name": "spare", 00:16:40.601 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:40.601 "is_configured": true, 00:16:40.601 "data_offset": 2048, 00:16:40.601 "data_size": 63488 00:16:40.601 }, 00:16:40.601 { 00:16:40.601 "name": "BaseBdev2", 00:16:40.601 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:40.601 "is_configured": true, 00:16:40.601 "data_offset": 2048, 00:16:40.601 "data_size": 63488 00:16:40.601 }, 00:16:40.601 { 00:16:40.601 "name": "BaseBdev3", 00:16:40.601 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:40.601 "is_configured": true, 00:16:40.601 "data_offset": 2048, 00:16:40.601 "data_size": 63488 00:16:40.601 }, 00:16:40.601 { 00:16:40.601 "name": "BaseBdev4", 00:16:40.601 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:40.601 "is_configured": true, 00:16:40.601 "data_offset": 2048, 00:16:40.601 "data_size": 63488 00:16:40.601 } 00:16:40.601 ] 00:16:40.601 }' 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.601 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.860 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.860 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:40.860 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.860 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:40.860 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.860 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.861 "name": "raid_bdev1", 00:16:40.861 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:40.861 "strip_size_kb": 64, 00:16:40.861 "state": "online", 00:16:40.861 "raid_level": "raid5f", 00:16:40.861 "superblock": true, 00:16:40.861 "num_base_bdevs": 4, 00:16:40.861 "num_base_bdevs_discovered": 4, 00:16:40.861 "num_base_bdevs_operational": 4, 00:16:40.861 "base_bdevs_list": [ 00:16:40.861 { 00:16:40.861 "name": "spare", 00:16:40.861 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:40.861 "is_configured": true, 00:16:40.861 
"data_offset": 2048, 00:16:40.861 "data_size": 63488 00:16:40.861 }, 00:16:40.861 { 00:16:40.861 "name": "BaseBdev2", 00:16:40.861 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:40.861 "is_configured": true, 00:16:40.861 "data_offset": 2048, 00:16:40.861 "data_size": 63488 00:16:40.861 }, 00:16:40.861 { 00:16:40.861 "name": "BaseBdev3", 00:16:40.861 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:40.861 "is_configured": true, 00:16:40.861 "data_offset": 2048, 00:16:40.861 "data_size": 63488 00:16:40.861 }, 00:16:40.861 { 00:16:40.861 "name": "BaseBdev4", 00:16:40.861 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:40.861 "is_configured": true, 00:16:40.861 "data_offset": 2048, 00:16:40.861 "data_size": 63488 00:16:40.861 } 00:16:40.861 ] 00:16:40.861 }' 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.861 21:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.120 [2024-11-26 21:23:59.168006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.120 [2024-11-26 21:23:59.168079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.120 [2024-11-26 21:23:59.168203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.120 [2024-11-26 21:23:59.168322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.120 [2024-11-26 21:23:59.168387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:41.120 
21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:41.120 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:41.379 /dev/nbd0 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:41.379 1+0 records in 00:16:41.379 1+0 records out 00:16:41.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541849 s, 7.6 MB/s 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.379 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:41.380 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.380 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:41.380 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:41.380 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:41.380 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:41.380 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:41.639 /dev/nbd1 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:41.639 1+0 records in 00:16:41.639 1+0 records out 00:16:41.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314139 s, 13.0 MB/s 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:41.639 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:41.900 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:41.900 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.900 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:41.900 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:41.900 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:41.900 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.900 21:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:42.159 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:42.160 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:42.160 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:42.160 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.160 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.160 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:42.160 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:42.160 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.160 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.160 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:42.160 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.419 
21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.419 [2024-11-26 21:24:00.339661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:42.419 [2024-11-26 21:24:00.339725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.419 [2024-11-26 21:24:00.339767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:42.419 [2024-11-26 21:24:00.339777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.419 [2024-11-26 21:24:00.342415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.419 [2024-11-26 21:24:00.342453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:42.419 [2024-11-26 21:24:00.342554] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:42.419 [2024-11-26 21:24:00.342611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.419 [2024-11-26 21:24:00.342761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.419 [2024-11-26 21:24:00.342852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.419 [2024-11-26 21:24:00.342942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:42.419 spare 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.419 [2024-11-26 21:24:00.442884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:42.419 [2024-11-26 21:24:00.442916] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:42.419 [2024-11-26 21:24:00.443195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:42.419 [2024-11-26 21:24:00.449796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:42.419 [2024-11-26 21:24:00.449818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:42.419 [2024-11-26 21:24:00.450011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.419 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.420 "name": "raid_bdev1", 00:16:42.420 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:42.420 "strip_size_kb": 64, 00:16:42.420 "state": "online", 00:16:42.420 "raid_level": "raid5f", 00:16:42.420 "superblock": true, 00:16:42.420 "num_base_bdevs": 4, 00:16:42.420 "num_base_bdevs_discovered": 4, 00:16:42.420 "num_base_bdevs_operational": 4, 00:16:42.420 "base_bdevs_list": [ 00:16:42.420 { 00:16:42.420 "name": "spare", 00:16:42.420 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:42.420 "is_configured": true, 00:16:42.420 "data_offset": 2048, 00:16:42.420 "data_size": 63488 00:16:42.420 }, 00:16:42.420 { 00:16:42.420 "name": "BaseBdev2", 00:16:42.420 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:42.420 "is_configured": true, 00:16:42.420 "data_offset": 2048, 00:16:42.420 "data_size": 63488 00:16:42.420 }, 00:16:42.420 { 00:16:42.420 "name": "BaseBdev3", 00:16:42.420 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:42.420 
"is_configured": true, 00:16:42.420 "data_offset": 2048, 00:16:42.420 "data_size": 63488 00:16:42.420 }, 00:16:42.420 { 00:16:42.420 "name": "BaseBdev4", 00:16:42.420 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:42.420 "is_configured": true, 00:16:42.420 "data_offset": 2048, 00:16:42.420 "data_size": 63488 00:16:42.420 } 00:16:42.420 ] 00:16:42.420 }' 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.420 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.989 "name": "raid_bdev1", 00:16:42.989 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:42.989 "strip_size_kb": 64, 00:16:42.989 "state": "online", 00:16:42.989 "raid_level": "raid5f", 
00:16:42.989 "superblock": true, 00:16:42.989 "num_base_bdevs": 4, 00:16:42.989 "num_base_bdevs_discovered": 4, 00:16:42.989 "num_base_bdevs_operational": 4, 00:16:42.989 "base_bdevs_list": [ 00:16:42.989 { 00:16:42.989 "name": "spare", 00:16:42.989 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:42.989 "is_configured": true, 00:16:42.989 "data_offset": 2048, 00:16:42.989 "data_size": 63488 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "name": "BaseBdev2", 00:16:42.989 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:42.989 "is_configured": true, 00:16:42.989 "data_offset": 2048, 00:16:42.989 "data_size": 63488 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "name": "BaseBdev3", 00:16:42.989 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:42.989 "is_configured": true, 00:16:42.989 "data_offset": 2048, 00:16:42.989 "data_size": 63488 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "name": "BaseBdev4", 00:16:42.989 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:42.989 "is_configured": true, 00:16:42.989 "data_offset": 2048, 00:16:42.989 "data_size": 63488 00:16:42.989 } 00:16:42.989 ] 00:16:42.989 }' 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.989 21:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.989 [2024-11-26 21:24:01.069988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.989 "name": "raid_bdev1", 00:16:42.989 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:42.989 "strip_size_kb": 64, 00:16:42.989 "state": "online", 00:16:42.989 "raid_level": "raid5f", 00:16:42.989 "superblock": true, 00:16:42.989 "num_base_bdevs": 4, 00:16:42.989 "num_base_bdevs_discovered": 3, 00:16:42.989 "num_base_bdevs_operational": 3, 00:16:42.989 "base_bdevs_list": [ 00:16:42.989 { 00:16:42.989 "name": null, 00:16:42.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.989 "is_configured": false, 00:16:42.989 "data_offset": 0, 00:16:42.989 "data_size": 63488 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "name": "BaseBdev2", 00:16:42.989 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:42.989 "is_configured": true, 00:16:42.989 "data_offset": 2048, 00:16:42.989 "data_size": 63488 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "name": "BaseBdev3", 00:16:42.989 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:42.989 "is_configured": true, 00:16:42.989 "data_offset": 2048, 00:16:42.989 "data_size": 63488 00:16:42.989 }, 00:16:42.989 { 00:16:42.989 "name": "BaseBdev4", 00:16:42.989 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:42.989 "is_configured": true, 00:16:42.989 "data_offset": 2048, 00:16:42.989 "data_size": 63488 00:16:42.989 } 00:16:42.989 ] 00:16:42.989 }' 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.989 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.558 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:43.558 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.558 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.558 [2024-11-26 21:24:01.489333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.558 [2024-11-26 21:24:01.489538] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:43.558 [2024-11-26 21:24:01.489567] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:43.558 [2024-11-26 21:24:01.489602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.558 [2024-11-26 21:24:01.503947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:43.558 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.558 21:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:43.558 [2024-11-26 21:24:01.512683] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.497 21:24:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.497 "name": "raid_bdev1", 00:16:44.497 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:44.497 "strip_size_kb": 64, 00:16:44.497 "state": "online", 00:16:44.497 "raid_level": "raid5f", 00:16:44.497 "superblock": true, 00:16:44.497 "num_base_bdevs": 4, 00:16:44.497 "num_base_bdevs_discovered": 4, 00:16:44.497 "num_base_bdevs_operational": 4, 00:16:44.497 "process": { 00:16:44.497 "type": "rebuild", 00:16:44.497 "target": "spare", 00:16:44.497 "progress": { 00:16:44.497 "blocks": 19200, 00:16:44.497 "percent": 10 00:16:44.497 } 00:16:44.497 }, 00:16:44.497 "base_bdevs_list": [ 00:16:44.497 { 00:16:44.497 "name": "spare", 00:16:44.497 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:44.497 "is_configured": true, 00:16:44.497 "data_offset": 2048, 00:16:44.497 "data_size": 63488 00:16:44.497 }, 00:16:44.497 { 00:16:44.497 "name": "BaseBdev2", 00:16:44.497 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:44.497 "is_configured": true, 00:16:44.497 "data_offset": 2048, 00:16:44.497 "data_size": 63488 00:16:44.497 }, 00:16:44.497 { 00:16:44.497 "name": "BaseBdev3", 00:16:44.497 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:44.497 "is_configured": true, 00:16:44.497 "data_offset": 2048, 00:16:44.497 "data_size": 
63488 00:16:44.497 }, 00:16:44.497 { 00:16:44.497 "name": "BaseBdev4", 00:16:44.497 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:44.497 "is_configured": true, 00:16:44.497 "data_offset": 2048, 00:16:44.497 "data_size": 63488 00:16:44.497 } 00:16:44.497 ] 00:16:44.497 }' 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.497 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.757 [2024-11-26 21:24:02.663771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.757 [2024-11-26 21:24:02.719623] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:44.757 [2024-11-26 21:24:02.719690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.757 [2024-11-26 21:24:02.719707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.757 [2024-11-26 21:24:02.719717] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.757 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.757 "name": "raid_bdev1", 00:16:44.757 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:44.757 "strip_size_kb": 64, 00:16:44.757 "state": "online", 00:16:44.757 "raid_level": "raid5f", 00:16:44.757 "superblock": true, 00:16:44.757 "num_base_bdevs": 4, 00:16:44.757 "num_base_bdevs_discovered": 3, 00:16:44.757 "num_base_bdevs_operational": 3, 00:16:44.757 "base_bdevs_list": [ 00:16:44.757 
{ 00:16:44.757 "name": null, 00:16:44.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.757 "is_configured": false, 00:16:44.757 "data_offset": 0, 00:16:44.757 "data_size": 63488 00:16:44.757 }, 00:16:44.757 { 00:16:44.757 "name": "BaseBdev2", 00:16:44.757 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:44.757 "is_configured": true, 00:16:44.757 "data_offset": 2048, 00:16:44.757 "data_size": 63488 00:16:44.757 }, 00:16:44.757 { 00:16:44.757 "name": "BaseBdev3", 00:16:44.757 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:44.757 "is_configured": true, 00:16:44.758 "data_offset": 2048, 00:16:44.758 "data_size": 63488 00:16:44.758 }, 00:16:44.758 { 00:16:44.758 "name": "BaseBdev4", 00:16:44.758 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:44.758 "is_configured": true, 00:16:44.758 "data_offset": 2048, 00:16:44.758 "data_size": 63488 00:16:44.758 } 00:16:44.758 ] 00:16:44.758 }' 00:16:44.758 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.758 21:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.327 21:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.327 21:24:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.327 21:24:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.327 [2024-11-26 21:24:03.180679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.327 [2024-11-26 21:24:03.180746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.327 [2024-11-26 21:24:03.180775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:45.327 [2024-11-26 21:24:03.180789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.327 [2024-11-26 21:24:03.181336] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.327 [2024-11-26 21:24:03.181367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.327 [2024-11-26 21:24:03.181464] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:45.327 [2024-11-26 21:24:03.181487] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:45.327 [2024-11-26 21:24:03.181498] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:45.327 [2024-11-26 21:24:03.181537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.327 [2024-11-26 21:24:03.195439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:45.327 spare 00:16:45.327 21:24:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.327 21:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:45.327 [2024-11-26 21:24:03.204123] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:46.265 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.265 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.265 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.265 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.265 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.265 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.265 21:24:04 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.265 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.265 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.265 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.265 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.265 "name": "raid_bdev1", 00:16:46.265 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:46.265 "strip_size_kb": 64, 00:16:46.265 "state": "online", 00:16:46.265 "raid_level": "raid5f", 00:16:46.265 "superblock": true, 00:16:46.265 "num_base_bdevs": 4, 00:16:46.265 "num_base_bdevs_discovered": 4, 00:16:46.265 "num_base_bdevs_operational": 4, 00:16:46.265 "process": { 00:16:46.265 "type": "rebuild", 00:16:46.265 "target": "spare", 00:16:46.265 "progress": { 00:16:46.265 "blocks": 19200, 00:16:46.265 "percent": 10 00:16:46.265 } 00:16:46.265 }, 00:16:46.265 "base_bdevs_list": [ 00:16:46.265 { 00:16:46.265 "name": "spare", 00:16:46.265 "uuid": "5b956257-90fe-5d07-8557-df17182d381d", 00:16:46.265 "is_configured": true, 00:16:46.265 "data_offset": 2048, 00:16:46.265 "data_size": 63488 00:16:46.265 }, 00:16:46.265 { 00:16:46.265 "name": "BaseBdev2", 00:16:46.265 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:46.265 "is_configured": true, 00:16:46.265 "data_offset": 2048, 00:16:46.265 "data_size": 63488 00:16:46.265 }, 00:16:46.265 { 00:16:46.265 "name": "BaseBdev3", 00:16:46.265 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:46.265 "is_configured": true, 00:16:46.265 "data_offset": 2048, 00:16:46.265 "data_size": 63488 00:16:46.265 }, 00:16:46.265 { 00:16:46.265 "name": "BaseBdev4", 00:16:46.265 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:46.266 "is_configured": true, 00:16:46.266 "data_offset": 2048, 00:16:46.266 "data_size": 63488 00:16:46.266 } 
00:16:46.266 ] 00:16:46.266 }' 00:16:46.266 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.266 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.266 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.266 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.266 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:46.266 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.266 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.266 [2024-11-26 21:24:04.347047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.266 [2024-11-26 21:24:04.411037] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:46.266 [2024-11-26 21:24:04.411089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.266 [2024-11-26 21:24:04.411109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.266 [2024-11-26 21:24:04.411117] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.526 "name": "raid_bdev1", 00:16:46.526 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:46.526 "strip_size_kb": 64, 00:16:46.526 "state": "online", 00:16:46.526 "raid_level": "raid5f", 00:16:46.526 "superblock": true, 00:16:46.526 "num_base_bdevs": 4, 00:16:46.526 "num_base_bdevs_discovered": 3, 00:16:46.526 "num_base_bdevs_operational": 3, 00:16:46.526 "base_bdevs_list": [ 00:16:46.526 { 00:16:46.526 "name": null, 00:16:46.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.526 "is_configured": false, 00:16:46.526 "data_offset": 0, 00:16:46.526 "data_size": 63488 00:16:46.526 }, 00:16:46.526 { 00:16:46.526 
"name": "BaseBdev2", 00:16:46.526 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:46.526 "is_configured": true, 00:16:46.526 "data_offset": 2048, 00:16:46.526 "data_size": 63488 00:16:46.526 }, 00:16:46.526 { 00:16:46.526 "name": "BaseBdev3", 00:16:46.526 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:46.526 "is_configured": true, 00:16:46.526 "data_offset": 2048, 00:16:46.526 "data_size": 63488 00:16:46.526 }, 00:16:46.526 { 00:16:46.526 "name": "BaseBdev4", 00:16:46.526 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:46.526 "is_configured": true, 00:16:46.526 "data_offset": 2048, 00:16:46.526 "data_size": 63488 00:16:46.526 } 00:16:46.526 ] 00:16:46.526 }' 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.526 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.786 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.786 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.786 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.786 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.786 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.786 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.786 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.786 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.786 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.046 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:47.046 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.046 "name": "raid_bdev1", 00:16:47.046 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:47.046 "strip_size_kb": 64, 00:16:47.046 "state": "online", 00:16:47.046 "raid_level": "raid5f", 00:16:47.046 "superblock": true, 00:16:47.046 "num_base_bdevs": 4, 00:16:47.046 "num_base_bdevs_discovered": 3, 00:16:47.046 "num_base_bdevs_operational": 3, 00:16:47.046 "base_bdevs_list": [ 00:16:47.046 { 00:16:47.046 "name": null, 00:16:47.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.046 "is_configured": false, 00:16:47.046 "data_offset": 0, 00:16:47.046 "data_size": 63488 00:16:47.046 }, 00:16:47.046 { 00:16:47.046 "name": "BaseBdev2", 00:16:47.046 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:47.046 "is_configured": true, 00:16:47.046 "data_offset": 2048, 00:16:47.046 "data_size": 63488 00:16:47.046 }, 00:16:47.046 { 00:16:47.046 "name": "BaseBdev3", 00:16:47.046 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:47.046 "is_configured": true, 00:16:47.046 "data_offset": 2048, 00:16:47.046 "data_size": 63488 00:16:47.046 }, 00:16:47.046 { 00:16:47.046 "name": "BaseBdev4", 00:16:47.046 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:47.046 "is_configured": true, 00:16:47.046 "data_offset": 2048, 00:16:47.046 "data_size": 63488 00:16:47.046 } 00:16:47.046 ] 00:16:47.046 }' 00:16:47.046 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.046 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.046 21:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.046 21:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.046 21:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:16:47.046 21:24:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.046 21:24:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.046 21:24:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.046 21:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:47.046 21:24:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.046 21:24:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.046 [2024-11-26 21:24:05.055165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:47.046 [2024-11-26 21:24:05.055221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.046 [2024-11-26 21:24:05.055263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:47.046 [2024-11-26 21:24:05.055272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.046 [2024-11-26 21:24:05.055788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.046 [2024-11-26 21:24:05.055820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:47.046 [2024-11-26 21:24:05.055905] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:47.046 [2024-11-26 21:24:05.055920] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:47.046 [2024-11-26 21:24:05.055934] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:47.046 [2024-11-26 21:24:05.055946] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:16:47.046 BaseBdev1 00:16:47.046 21:24:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.046 21:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.985 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.985 21:24:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.985 "name": "raid_bdev1", 00:16:47.985 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:47.985 "strip_size_kb": 64, 00:16:47.985 "state": "online", 00:16:47.985 "raid_level": "raid5f", 00:16:47.985 "superblock": true, 00:16:47.985 "num_base_bdevs": 4, 00:16:47.985 "num_base_bdevs_discovered": 3, 00:16:47.985 "num_base_bdevs_operational": 3, 00:16:47.985 "base_bdevs_list": [ 00:16:47.985 { 00:16:47.985 "name": null, 00:16:47.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.985 "is_configured": false, 00:16:47.985 "data_offset": 0, 00:16:47.985 "data_size": 63488 00:16:47.985 }, 00:16:47.985 { 00:16:47.985 "name": "BaseBdev2", 00:16:47.986 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:47.986 "is_configured": true, 00:16:47.986 "data_offset": 2048, 00:16:47.986 "data_size": 63488 00:16:47.986 }, 00:16:47.986 { 00:16:47.986 "name": "BaseBdev3", 00:16:47.986 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:47.986 "is_configured": true, 00:16:47.986 "data_offset": 2048, 00:16:47.986 "data_size": 63488 00:16:47.986 }, 00:16:47.986 { 00:16:47.986 "name": "BaseBdev4", 00:16:47.986 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:47.986 "is_configured": true, 00:16:47.986 "data_offset": 2048, 00:16:47.986 "data_size": 63488 00:16:47.986 } 00:16:47.986 ] 00:16:47.986 }' 00:16:47.986 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.986 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.556 21:24:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.556 "name": "raid_bdev1", 00:16:48.556 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:48.556 "strip_size_kb": 64, 00:16:48.556 "state": "online", 00:16:48.556 "raid_level": "raid5f", 00:16:48.556 "superblock": true, 00:16:48.556 "num_base_bdevs": 4, 00:16:48.556 "num_base_bdevs_discovered": 3, 00:16:48.556 "num_base_bdevs_operational": 3, 00:16:48.556 "base_bdevs_list": [ 00:16:48.556 { 00:16:48.556 "name": null, 00:16:48.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.556 "is_configured": false, 00:16:48.556 "data_offset": 0, 00:16:48.556 "data_size": 63488 00:16:48.556 }, 00:16:48.556 { 00:16:48.556 "name": "BaseBdev2", 00:16:48.556 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:48.556 "is_configured": true, 00:16:48.556 "data_offset": 2048, 00:16:48.556 "data_size": 63488 00:16:48.556 }, 00:16:48.556 { 00:16:48.556 "name": "BaseBdev3", 00:16:48.556 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:48.556 "is_configured": true, 00:16:48.556 "data_offset": 2048, 00:16:48.556 "data_size": 63488 00:16:48.556 }, 00:16:48.556 { 00:16:48.556 "name": "BaseBdev4", 00:16:48.556 "uuid": 
"2519b52f-aab2-5500-9015-f4b87f482961", 00:16:48.556 "is_configured": true, 00:16:48.556 "data_offset": 2048, 00:16:48.556 "data_size": 63488 00:16:48.556 } 00:16:48.556 ] 00:16:48.556 }' 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.556 [2024-11-26 21:24:06.624510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.556 
[2024-11-26 21:24:06.624704] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:48.556 [2024-11-26 21:24:06.624721] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:48.556 request: 00:16:48.556 { 00:16:48.556 "base_bdev": "BaseBdev1", 00:16:48.556 "raid_bdev": "raid_bdev1", 00:16:48.556 "method": "bdev_raid_add_base_bdev", 00:16:48.556 "req_id": 1 00:16:48.556 } 00:16:48.556 Got JSON-RPC error response 00:16:48.556 response: 00:16:48.556 { 00:16:48.556 "code": -22, 00:16:48.556 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:48.556 } 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.556 21:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.496 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.756 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.756 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.756 "name": "raid_bdev1", 00:16:49.756 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:49.756 "strip_size_kb": 64, 00:16:49.756 "state": "online", 00:16:49.756 "raid_level": "raid5f", 00:16:49.756 "superblock": true, 00:16:49.756 "num_base_bdevs": 4, 00:16:49.756 "num_base_bdevs_discovered": 3, 00:16:49.756 "num_base_bdevs_operational": 3, 00:16:49.756 "base_bdevs_list": [ 00:16:49.756 { 00:16:49.756 "name": null, 00:16:49.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.756 "is_configured": false, 00:16:49.756 "data_offset": 0, 00:16:49.756 "data_size": 63488 00:16:49.756 }, 00:16:49.756 { 00:16:49.756 "name": "BaseBdev2", 00:16:49.756 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:49.756 "is_configured": true, 00:16:49.756 "data_offset": 2048, 00:16:49.756 "data_size": 63488 00:16:49.756 }, 00:16:49.756 { 00:16:49.756 "name": 
"BaseBdev3", 00:16:49.757 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:49.757 "is_configured": true, 00:16:49.757 "data_offset": 2048, 00:16:49.757 "data_size": 63488 00:16:49.757 }, 00:16:49.757 { 00:16:49.757 "name": "BaseBdev4", 00:16:49.757 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:49.757 "is_configured": true, 00:16:49.757 "data_offset": 2048, 00:16:49.757 "data_size": 63488 00:16:49.757 } 00:16:49.757 ] 00:16:49.757 }' 00:16:49.757 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.757 21:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.017 "name": "raid_bdev1", 00:16:50.017 "uuid": "2b920fb2-101d-4c56-a7e3-d74c1f08adeb", 00:16:50.017 
"strip_size_kb": 64, 00:16:50.017 "state": "online", 00:16:50.017 "raid_level": "raid5f", 00:16:50.017 "superblock": true, 00:16:50.017 "num_base_bdevs": 4, 00:16:50.017 "num_base_bdevs_discovered": 3, 00:16:50.017 "num_base_bdevs_operational": 3, 00:16:50.017 "base_bdevs_list": [ 00:16:50.017 { 00:16:50.017 "name": null, 00:16:50.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.017 "is_configured": false, 00:16:50.017 "data_offset": 0, 00:16:50.017 "data_size": 63488 00:16:50.017 }, 00:16:50.017 { 00:16:50.017 "name": "BaseBdev2", 00:16:50.017 "uuid": "a3dc4a79-501e-5fdf-abdd-867b3cf0f9aa", 00:16:50.017 "is_configured": true, 00:16:50.017 "data_offset": 2048, 00:16:50.017 "data_size": 63488 00:16:50.017 }, 00:16:50.017 { 00:16:50.017 "name": "BaseBdev3", 00:16:50.017 "uuid": "fba6d4bc-be97-5033-9d9e-e8ed4d8d43e2", 00:16:50.017 "is_configured": true, 00:16:50.017 "data_offset": 2048, 00:16:50.017 "data_size": 63488 00:16:50.017 }, 00:16:50.017 { 00:16:50.017 "name": "BaseBdev4", 00:16:50.017 "uuid": "2519b52f-aab2-5500-9015-f4b87f482961", 00:16:50.017 "is_configured": true, 00:16:50.017 "data_offset": 2048, 00:16:50.017 "data_size": 63488 00:16:50.017 } 00:16:50.017 ] 00:16:50.017 }' 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84921 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84921 ']' 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84921 00:16:50.017 
21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.017 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84921 00:16:50.277 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.278 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.278 killing process with pid 84921 00:16:50.278 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84921' 00:16:50.278 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84921 00:16:50.278 Received shutdown signal, test time was about 60.000000 seconds 00:16:50.278 00:16:50.278 Latency(us) 00:16:50.278 [2024-11-26T21:24:08.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.278 [2024-11-26T21:24:08.434Z] =================================================================================================================== 00:16:50.278 [2024-11-26T21:24:08.434Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:50.278 [2024-11-26 21:24:08.187885] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.278 [2024-11-26 21:24:08.188038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.278 21:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84921 00:16:50.278 [2024-11-26 21:24:08.188127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.278 [2024-11-26 21:24:08.188149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:50.848 [2024-11-26 21:24:08.693241] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:51.788 21:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:51.788 00:16:51.788 real 0m26.656s 00:16:51.788 user 0m32.980s 00:16:51.788 sys 0m3.080s 00:16:51.788 21:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.788 21:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.788 ************************************ 00:16:51.788 END TEST raid5f_rebuild_test_sb 00:16:51.788 ************************************ 00:16:51.788 21:24:09 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:51.788 21:24:09 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:51.788 21:24:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:51.788 21:24:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.788 21:24:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.788 ************************************ 00:16:51.788 START TEST raid_state_function_test_sb_4k 00:16:51.788 ************************************ 00:16:51.788 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:51.788 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:51.788 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:51.788 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:51.788 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:51.788 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:51.788 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85726 
00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:51.789 Process raid pid: 85726 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85726' 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85726 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85726 ']' 00:16:51.789 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.049 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.049 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.049 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.049 21:24:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.049 [2024-11-26 21:24:10.033301] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:16:52.049 [2024-11-26 21:24:10.033430] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.309 [2024-11-26 21:24:10.217293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.309 [2024-11-26 21:24:10.355659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.569 [2024-11-26 21:24:10.587096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.569 [2024-11-26 21:24:10.587135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.829 [2024-11-26 21:24:10.845067] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.829 [2024-11-26 21:24:10.845118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.829 [2024-11-26 21:24:10.845127] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.829 [2024-11-26 21:24:10.845138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.829 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.829 "name": "Existed_Raid", 00:16:52.829 "uuid": 
"9b57b271-aaf8-4ba0-a651-13a401c6d135", 00:16:52.829 "strip_size_kb": 0, 00:16:52.829 "state": "configuring", 00:16:52.829 "raid_level": "raid1", 00:16:52.829 "superblock": true, 00:16:52.829 "num_base_bdevs": 2, 00:16:52.829 "num_base_bdevs_discovered": 0, 00:16:52.829 "num_base_bdevs_operational": 2, 00:16:52.829 "base_bdevs_list": [ 00:16:52.829 { 00:16:52.829 "name": "BaseBdev1", 00:16:52.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.829 "is_configured": false, 00:16:52.829 "data_offset": 0, 00:16:52.829 "data_size": 0 00:16:52.829 }, 00:16:52.829 { 00:16:52.829 "name": "BaseBdev2", 00:16:52.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.829 "is_configured": false, 00:16:52.829 "data_offset": 0, 00:16:52.829 "data_size": 0 00:16:52.829 } 00:16:52.830 ] 00:16:52.830 }' 00:16:52.830 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.830 21:24:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.399 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.400 [2024-11-26 21:24:11.264265] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.400 [2024-11-26 21:24:11.264299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:53.400 21:24:11 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.400 [2024-11-26 21:24:11.276261] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.400 [2024-11-26 21:24:11.276294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.400 [2024-11-26 21:24:11.276302] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.400 [2024-11-26 21:24:11.276314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.400 [2024-11-26 21:24:11.327869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.400 BaseBdev1 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.400 [ 00:16:53.400 { 00:16:53.400 "name": "BaseBdev1", 00:16:53.400 "aliases": [ 00:16:53.400 "2e1b3c63-eabb-48ad-9075-f8828ba82374" 00:16:53.400 ], 00:16:53.400 "product_name": "Malloc disk", 00:16:53.400 "block_size": 4096, 00:16:53.400 "num_blocks": 8192, 00:16:53.400 "uuid": "2e1b3c63-eabb-48ad-9075-f8828ba82374", 00:16:53.400 "assigned_rate_limits": { 00:16:53.400 "rw_ios_per_sec": 0, 00:16:53.400 "rw_mbytes_per_sec": 0, 00:16:53.400 "r_mbytes_per_sec": 0, 00:16:53.400 "w_mbytes_per_sec": 0 00:16:53.400 }, 00:16:53.400 "claimed": true, 00:16:53.400 "claim_type": "exclusive_write", 00:16:53.400 "zoned": false, 00:16:53.400 "supported_io_types": { 00:16:53.400 "read": true, 00:16:53.400 "write": true, 00:16:53.400 "unmap": true, 00:16:53.400 "flush": true, 00:16:53.400 "reset": true, 00:16:53.400 "nvme_admin": false, 00:16:53.400 "nvme_io": false, 00:16:53.400 "nvme_io_md": false, 00:16:53.400 "write_zeroes": true, 00:16:53.400 "zcopy": true, 00:16:53.400 
"get_zone_info": false, 00:16:53.400 "zone_management": false, 00:16:53.400 "zone_append": false, 00:16:53.400 "compare": false, 00:16:53.400 "compare_and_write": false, 00:16:53.400 "abort": true, 00:16:53.400 "seek_hole": false, 00:16:53.400 "seek_data": false, 00:16:53.400 "copy": true, 00:16:53.400 "nvme_iov_md": false 00:16:53.400 }, 00:16:53.400 "memory_domains": [ 00:16:53.400 { 00:16:53.400 "dma_device_id": "system", 00:16:53.400 "dma_device_type": 1 00:16:53.400 }, 00:16:53.400 { 00:16:53.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.400 "dma_device_type": 2 00:16:53.400 } 00:16:53.400 ], 00:16:53.400 "driver_specific": {} 00:16:53.400 } 00:16:53.400 ] 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.400 "name": "Existed_Raid", 00:16:53.400 "uuid": "2e2f5257-0d8d-49c7-9230-75866fd7e867", 00:16:53.400 "strip_size_kb": 0, 00:16:53.400 "state": "configuring", 00:16:53.400 "raid_level": "raid1", 00:16:53.400 "superblock": true, 00:16:53.400 "num_base_bdevs": 2, 00:16:53.400 "num_base_bdevs_discovered": 1, 00:16:53.400 "num_base_bdevs_operational": 2, 00:16:53.400 "base_bdevs_list": [ 00:16:53.400 { 00:16:53.400 "name": "BaseBdev1", 00:16:53.400 "uuid": "2e1b3c63-eabb-48ad-9075-f8828ba82374", 00:16:53.400 "is_configured": true, 00:16:53.400 "data_offset": 256, 00:16:53.400 "data_size": 7936 00:16:53.400 }, 00:16:53.400 { 00:16:53.400 "name": "BaseBdev2", 00:16:53.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.400 "is_configured": false, 00:16:53.400 "data_offset": 0, 00:16:53.400 "data_size": 0 00:16:53.400 } 00:16:53.400 ] 00:16:53.400 }' 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.400 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.660 [2024-11-26 21:24:11.759122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.660 [2024-11-26 21:24:11.759163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.660 [2024-11-26 21:24:11.771149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.660 [2024-11-26 21:24:11.773191] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.660 [2024-11-26 21:24:11.773226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:53.660 21:24:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.660 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.661 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.661 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.661 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.661 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.661 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.661 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.661 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.920 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.920 "name": "Existed_Raid", 00:16:53.920 "uuid": "e8cd0f4a-d27c-4939-9cca-0ad52dfc67d8", 00:16:53.920 "strip_size_kb": 0, 00:16:53.920 "state": "configuring", 00:16:53.920 "raid_level": "raid1", 00:16:53.920 "superblock": true, 
00:16:53.920 "num_base_bdevs": 2, 00:16:53.920 "num_base_bdevs_discovered": 1, 00:16:53.920 "num_base_bdevs_operational": 2, 00:16:53.920 "base_bdevs_list": [ 00:16:53.920 { 00:16:53.920 "name": "BaseBdev1", 00:16:53.920 "uuid": "2e1b3c63-eabb-48ad-9075-f8828ba82374", 00:16:53.920 "is_configured": true, 00:16:53.920 "data_offset": 256, 00:16:53.920 "data_size": 7936 00:16:53.920 }, 00:16:53.920 { 00:16:53.920 "name": "BaseBdev2", 00:16:53.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.920 "is_configured": false, 00:16:53.920 "data_offset": 0, 00:16:53.920 "data_size": 0 00:16:53.920 } 00:16:53.920 ] 00:16:53.920 }' 00:16:53.920 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.920 21:24:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.181 [2024-11-26 21:24:12.215583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.181 [2024-11-26 21:24:12.215835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:54.181 [2024-11-26 21:24:12.215850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:54.181 [2024-11-26 21:24:12.216147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:54.181 [2024-11-26 21:24:12.216348] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:54.181 [2024-11-26 21:24:12.216370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:16:54.181 BaseBdev2 00:16:54.181 [2024-11-26 21:24:12.216533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.181 [ 00:16:54.181 { 00:16:54.181 "name": "BaseBdev2", 00:16:54.181 "aliases": [ 00:16:54.181 "a571f5a3-7819-4cbd-8cd4-ae6a27dd829f" 00:16:54.181 ], 00:16:54.181 "product_name": "Malloc 
disk", 00:16:54.181 "block_size": 4096, 00:16:54.181 "num_blocks": 8192, 00:16:54.181 "uuid": "a571f5a3-7819-4cbd-8cd4-ae6a27dd829f", 00:16:54.181 "assigned_rate_limits": { 00:16:54.181 "rw_ios_per_sec": 0, 00:16:54.181 "rw_mbytes_per_sec": 0, 00:16:54.181 "r_mbytes_per_sec": 0, 00:16:54.181 "w_mbytes_per_sec": 0 00:16:54.181 }, 00:16:54.181 "claimed": true, 00:16:54.181 "claim_type": "exclusive_write", 00:16:54.181 "zoned": false, 00:16:54.181 "supported_io_types": { 00:16:54.181 "read": true, 00:16:54.181 "write": true, 00:16:54.181 "unmap": true, 00:16:54.181 "flush": true, 00:16:54.181 "reset": true, 00:16:54.181 "nvme_admin": false, 00:16:54.181 "nvme_io": false, 00:16:54.181 "nvme_io_md": false, 00:16:54.181 "write_zeroes": true, 00:16:54.181 "zcopy": true, 00:16:54.181 "get_zone_info": false, 00:16:54.181 "zone_management": false, 00:16:54.181 "zone_append": false, 00:16:54.181 "compare": false, 00:16:54.181 "compare_and_write": false, 00:16:54.181 "abort": true, 00:16:54.181 "seek_hole": false, 00:16:54.181 "seek_data": false, 00:16:54.181 "copy": true, 00:16:54.181 "nvme_iov_md": false 00:16:54.181 }, 00:16:54.181 "memory_domains": [ 00:16:54.181 { 00:16:54.181 "dma_device_id": "system", 00:16:54.181 "dma_device_type": 1 00:16:54.181 }, 00:16:54.181 { 00:16:54.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.181 "dma_device_type": 2 00:16:54.181 } 00:16:54.181 ], 00:16:54.181 "driver_specific": {} 00:16:54.181 } 00:16:54.181 ] 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.181 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.182 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.182 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.182 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.182 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.182 "name": "Existed_Raid", 00:16:54.182 "uuid": "e8cd0f4a-d27c-4939-9cca-0ad52dfc67d8", 00:16:54.182 "strip_size_kb": 0, 00:16:54.182 "state": "online", 
00:16:54.182 "raid_level": "raid1", 00:16:54.182 "superblock": true, 00:16:54.182 "num_base_bdevs": 2, 00:16:54.182 "num_base_bdevs_discovered": 2, 00:16:54.182 "num_base_bdevs_operational": 2, 00:16:54.182 "base_bdevs_list": [ 00:16:54.182 { 00:16:54.182 "name": "BaseBdev1", 00:16:54.182 "uuid": "2e1b3c63-eabb-48ad-9075-f8828ba82374", 00:16:54.182 "is_configured": true, 00:16:54.182 "data_offset": 256, 00:16:54.182 "data_size": 7936 00:16:54.182 }, 00:16:54.182 { 00:16:54.182 "name": "BaseBdev2", 00:16:54.182 "uuid": "a571f5a3-7819-4cbd-8cd4-ae6a27dd829f", 00:16:54.182 "is_configured": true, 00:16:54.182 "data_offset": 256, 00:16:54.182 "data_size": 7936 00:16:54.182 } 00:16:54.182 ] 00:16:54.182 }' 00:16:54.182 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.182 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.752 [2024-11-26 21:24:12.687177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.752 "name": "Existed_Raid", 00:16:54.752 "aliases": [ 00:16:54.752 "e8cd0f4a-d27c-4939-9cca-0ad52dfc67d8" 00:16:54.752 ], 00:16:54.752 "product_name": "Raid Volume", 00:16:54.752 "block_size": 4096, 00:16:54.752 "num_blocks": 7936, 00:16:54.752 "uuid": "e8cd0f4a-d27c-4939-9cca-0ad52dfc67d8", 00:16:54.752 "assigned_rate_limits": { 00:16:54.752 "rw_ios_per_sec": 0, 00:16:54.752 "rw_mbytes_per_sec": 0, 00:16:54.752 "r_mbytes_per_sec": 0, 00:16:54.752 "w_mbytes_per_sec": 0 00:16:54.752 }, 00:16:54.752 "claimed": false, 00:16:54.752 "zoned": false, 00:16:54.752 "supported_io_types": { 00:16:54.752 "read": true, 00:16:54.752 "write": true, 00:16:54.752 "unmap": false, 00:16:54.752 "flush": false, 00:16:54.752 "reset": true, 00:16:54.752 "nvme_admin": false, 00:16:54.752 "nvme_io": false, 00:16:54.752 "nvme_io_md": false, 00:16:54.752 "write_zeroes": true, 00:16:54.752 "zcopy": false, 00:16:54.752 "get_zone_info": false, 00:16:54.752 "zone_management": false, 00:16:54.752 "zone_append": false, 00:16:54.752 "compare": false, 00:16:54.752 "compare_and_write": false, 00:16:54.752 "abort": false, 00:16:54.752 "seek_hole": false, 00:16:54.752 "seek_data": false, 00:16:54.752 "copy": false, 00:16:54.752 "nvme_iov_md": false 00:16:54.752 }, 00:16:54.752 "memory_domains": [ 00:16:54.752 { 00:16:54.752 "dma_device_id": "system", 00:16:54.752 "dma_device_type": 1 00:16:54.752 }, 00:16:54.752 { 00:16:54.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.752 "dma_device_type": 2 00:16:54.752 }, 00:16:54.752 { 00:16:54.752 
"dma_device_id": "system", 00:16:54.752 "dma_device_type": 1 00:16:54.752 }, 00:16:54.752 { 00:16:54.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.752 "dma_device_type": 2 00:16:54.752 } 00:16:54.752 ], 00:16:54.752 "driver_specific": { 00:16:54.752 "raid": { 00:16:54.752 "uuid": "e8cd0f4a-d27c-4939-9cca-0ad52dfc67d8", 00:16:54.752 "strip_size_kb": 0, 00:16:54.752 "state": "online", 00:16:54.752 "raid_level": "raid1", 00:16:54.752 "superblock": true, 00:16:54.752 "num_base_bdevs": 2, 00:16:54.752 "num_base_bdevs_discovered": 2, 00:16:54.752 "num_base_bdevs_operational": 2, 00:16:54.752 "base_bdevs_list": [ 00:16:54.752 { 00:16:54.752 "name": "BaseBdev1", 00:16:54.752 "uuid": "2e1b3c63-eabb-48ad-9075-f8828ba82374", 00:16:54.752 "is_configured": true, 00:16:54.752 "data_offset": 256, 00:16:54.752 "data_size": 7936 00:16:54.752 }, 00:16:54.752 { 00:16:54.752 "name": "BaseBdev2", 00:16:54.752 "uuid": "a571f5a3-7819-4cbd-8cd4-ae6a27dd829f", 00:16:54.752 "is_configured": true, 00:16:54.752 "data_offset": 256, 00:16:54.752 "data_size": 7936 00:16:54.752 } 00:16:54.752 ] 00:16:54.752 } 00:16:54.752 } 00:16:54.752 }' 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:54.752 BaseBdev2' 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:54.752 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.752 
21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.752 [2024-11-26 21:24:12.886478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.013 21:24:12 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.013 21:24:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.013 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.013 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.013 "name": "Existed_Raid", 00:16:55.013 "uuid": "e8cd0f4a-d27c-4939-9cca-0ad52dfc67d8", 00:16:55.013 "strip_size_kb": 0, 00:16:55.013 "state": "online", 00:16:55.013 "raid_level": "raid1", 00:16:55.013 "superblock": true, 00:16:55.013 "num_base_bdevs": 2, 00:16:55.013 "num_base_bdevs_discovered": 1, 00:16:55.014 "num_base_bdevs_operational": 1, 00:16:55.014 "base_bdevs_list": [ 00:16:55.014 { 00:16:55.014 "name": null, 00:16:55.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.014 "is_configured": false, 00:16:55.014 "data_offset": 0, 00:16:55.014 "data_size": 7936 00:16:55.014 }, 00:16:55.014 { 00:16:55.014 "name": "BaseBdev2", 00:16:55.014 "uuid": "a571f5a3-7819-4cbd-8cd4-ae6a27dd829f", 00:16:55.014 "is_configured": true, 00:16:55.014 "data_offset": 256, 00:16:55.014 "data_size": 7936 00:16:55.014 } 00:16:55.014 ] 00:16:55.014 }' 00:16:55.014 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.014 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.274 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:55.274 21:24:13 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.274 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:55.274 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.274 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.274 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.274 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.274 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:55.274 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.274 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:55.274 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.274 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.274 [2024-11-26 21:24:13.425314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:55.274 [2024-11-26 21:24:13.425433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.534 [2024-11-26 21:24:13.524256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.534 [2024-11-26 21:24:13.524315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.534 [2024-11-26 21:24:13.524328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:55.534 21:24:13 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85726 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85726 ']' 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85726 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85726 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.534 killing process with pid 85726 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85726' 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85726 00:16:55.534 [2024-11-26 21:24:13.609513] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:55.534 21:24:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85726 00:16:55.534 [2024-11-26 21:24:13.626079] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.917 21:24:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:56.917 00:16:56.917 real 0m4.871s 00:16:56.917 user 0m6.780s 00:16:56.917 sys 0m0.931s 00:16:56.917 21:24:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.917 21:24:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.917 ************************************ 00:16:56.917 END TEST raid_state_function_test_sb_4k 00:16:56.917 ************************************ 00:16:56.917 21:24:14 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:56.917 21:24:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:56.917 21:24:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.917 21:24:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.917 ************************************ 00:16:56.917 START TEST raid_superblock_test_4k 00:16:56.917 ************************************ 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85977 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 85977 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85977 ']' 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.917 21:24:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.917 [2024-11-26 21:24:14.974107] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:56.917 [2024-11-26 21:24:14.974226] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85977 ] 00:16:57.178 [2024-11-26 21:24:15.150860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.178 [2024-11-26 21:24:15.276518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.438 [2024-11-26 21:24:15.504720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.438 [2024-11-26 21:24:15.504781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:57.699 21:24:15 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.699 malloc1 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.699 [2024-11-26 21:24:15.839230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:57.699 [2024-11-26 21:24:15.839289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.699 
[2024-11-26 21:24:15.839312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:57.699 [2024-11-26 21:24:15.839322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.699 [2024-11-26 21:24:15.841681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.699 [2024-11-26 21:24:15.841713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:57.699 pt1 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.699 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.960 malloc2 00:16:57.960 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:57.960 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.960 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.960 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.960 [2024-11-26 21:24:15.899977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.960 [2024-11-26 21:24:15.900023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.960 [2024-11-26 21:24:15.900050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:57.960 [2024-11-26 21:24:15.900059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.960 [2024-11-26 21:24:15.902414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.960 [2024-11-26 21:24:15.902444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.960 pt2 00:16:57.960 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.960 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.960 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.960 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:57.960 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.960 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.960 [2024-11-26 21:24:15.912025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.960 [2024-11-26 21:24:15.914052] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.960 [2024-11-26 21:24:15.914212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:57.960 [2024-11-26 21:24:15.914235] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:57.960 [2024-11-26 21:24:15.914461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:57.960 [2024-11-26 21:24:15.914614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:57.960 [2024-11-26 21:24:15.914640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:57.961 [2024-11-26 21:24:15.914790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.961 "name": "raid_bdev1", 00:16:57.961 "uuid": "ccc0ff46-3238-4af4-b580-a0382848a495", 00:16:57.961 "strip_size_kb": 0, 00:16:57.961 "state": "online", 00:16:57.961 "raid_level": "raid1", 00:16:57.961 "superblock": true, 00:16:57.961 "num_base_bdevs": 2, 00:16:57.961 "num_base_bdevs_discovered": 2, 00:16:57.961 "num_base_bdevs_operational": 2, 00:16:57.961 "base_bdevs_list": [ 00:16:57.961 { 00:16:57.961 "name": "pt1", 00:16:57.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.961 "is_configured": true, 00:16:57.961 "data_offset": 256, 00:16:57.961 "data_size": 7936 00:16:57.961 }, 00:16:57.961 { 00:16:57.961 "name": "pt2", 00:16:57.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.961 "is_configured": true, 00:16:57.961 "data_offset": 256, 00:16:57.961 "data_size": 7936 00:16:57.961 } 00:16:57.961 ] 00:16:57.961 }' 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.961 21:24:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.221 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:58.221 21:24:16 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:58.221 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.221 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.221 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.221 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.221 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.221 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.221 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.221 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.221 [2024-11-26 21:24:16.343437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.221 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.481 "name": "raid_bdev1", 00:16:58.481 "aliases": [ 00:16:58.481 "ccc0ff46-3238-4af4-b580-a0382848a495" 00:16:58.481 ], 00:16:58.481 "product_name": "Raid Volume", 00:16:58.481 "block_size": 4096, 00:16:58.481 "num_blocks": 7936, 00:16:58.481 "uuid": "ccc0ff46-3238-4af4-b580-a0382848a495", 00:16:58.481 "assigned_rate_limits": { 00:16:58.481 "rw_ios_per_sec": 0, 00:16:58.481 "rw_mbytes_per_sec": 0, 00:16:58.481 "r_mbytes_per_sec": 0, 00:16:58.481 "w_mbytes_per_sec": 0 00:16:58.481 }, 00:16:58.481 "claimed": false, 00:16:58.481 "zoned": false, 00:16:58.481 "supported_io_types": { 00:16:58.481 "read": true, 00:16:58.481 "write": true, 00:16:58.481 "unmap": false, 00:16:58.481 "flush": false, 
00:16:58.481 "reset": true, 00:16:58.481 "nvme_admin": false, 00:16:58.481 "nvme_io": false, 00:16:58.481 "nvme_io_md": false, 00:16:58.481 "write_zeroes": true, 00:16:58.481 "zcopy": false, 00:16:58.481 "get_zone_info": false, 00:16:58.481 "zone_management": false, 00:16:58.481 "zone_append": false, 00:16:58.481 "compare": false, 00:16:58.481 "compare_and_write": false, 00:16:58.481 "abort": false, 00:16:58.481 "seek_hole": false, 00:16:58.481 "seek_data": false, 00:16:58.481 "copy": false, 00:16:58.481 "nvme_iov_md": false 00:16:58.481 }, 00:16:58.481 "memory_domains": [ 00:16:58.481 { 00:16:58.481 "dma_device_id": "system", 00:16:58.481 "dma_device_type": 1 00:16:58.481 }, 00:16:58.481 { 00:16:58.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.481 "dma_device_type": 2 00:16:58.481 }, 00:16:58.481 { 00:16:58.481 "dma_device_id": "system", 00:16:58.481 "dma_device_type": 1 00:16:58.481 }, 00:16:58.481 { 00:16:58.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.481 "dma_device_type": 2 00:16:58.481 } 00:16:58.481 ], 00:16:58.481 "driver_specific": { 00:16:58.481 "raid": { 00:16:58.481 "uuid": "ccc0ff46-3238-4af4-b580-a0382848a495", 00:16:58.481 "strip_size_kb": 0, 00:16:58.481 "state": "online", 00:16:58.481 "raid_level": "raid1", 00:16:58.481 "superblock": true, 00:16:58.481 "num_base_bdevs": 2, 00:16:58.481 "num_base_bdevs_discovered": 2, 00:16:58.481 "num_base_bdevs_operational": 2, 00:16:58.481 "base_bdevs_list": [ 00:16:58.481 { 00:16:58.481 "name": "pt1", 00:16:58.481 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.481 "is_configured": true, 00:16:58.481 "data_offset": 256, 00:16:58.481 "data_size": 7936 00:16:58.481 }, 00:16:58.481 { 00:16:58.481 "name": "pt2", 00:16:58.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.481 "is_configured": true, 00:16:58.481 "data_offset": 256, 00:16:58.481 "data_size": 7936 00:16:58.481 } 00:16:58.481 ] 00:16:58.481 } 00:16:58.481 } 00:16:58.481 }' 00:16:58.481 21:24:16 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:58.481 pt2' 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.481 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.482 [2024-11-26 21:24:16.559063] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ccc0ff46-3238-4af4-b580-a0382848a495 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z ccc0ff46-3238-4af4-b580-a0382848a495 ']' 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.482 [2024-11-26 21:24:16.586743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.482 [2024-11-26 21:24:16.586764] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.482 [2024-11-26 21:24:16.586834] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.482 [2024-11-26 21:24:16.586891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.482 [2024-11-26 21:24:16.586908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.482 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.742 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 [2024-11-26 21:24:16.710561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:58.742 [2024-11-26 21:24:16.712603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:58.742 [2024-11-26 21:24:16.712667] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:58.742 [2024-11-26 21:24:16.712710] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:58.742 [2024-11-26 21:24:16.712723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.742 [2024-11-26 21:24:16.712733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:58.742 request: 00:16:58.742 { 00:16:58.743 "name": "raid_bdev1", 00:16:58.743 "raid_level": "raid1", 00:16:58.743 "base_bdevs": [ 00:16:58.743 "malloc1", 00:16:58.743 "malloc2" 00:16:58.743 ], 00:16:58.743 "superblock": false, 00:16:58.743 "method": "bdev_raid_create", 00:16:58.743 "req_id": 1 00:16:58.743 } 00:16:58.743 Got JSON-RPC error response 00:16:58.743 response: 00:16:58.743 { 00:16:58.743 "code": -17, 00:16:58.743 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:58.743 } 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.743 [2024-11-26 21:24:16.778435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.743 [2024-11-26 21:24:16.778475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.743 [2024-11-26 21:24:16.778493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:58.743 [2024-11-26 21:24:16.778504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.743 [2024-11-26 21:24:16.780827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.743 [2024-11-26 21:24:16.780859] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.743 [2024-11-26 21:24:16.780925] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:58.743 [2024-11-26 21:24:16.780989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:58.743 pt1 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.743 "name": "raid_bdev1", 00:16:58.743 "uuid": "ccc0ff46-3238-4af4-b580-a0382848a495", 00:16:58.743 "strip_size_kb": 0, 00:16:58.743 "state": "configuring", 00:16:58.743 "raid_level": "raid1", 00:16:58.743 "superblock": true, 00:16:58.743 "num_base_bdevs": 2, 00:16:58.743 "num_base_bdevs_discovered": 1, 00:16:58.743 "num_base_bdevs_operational": 2, 00:16:58.743 "base_bdevs_list": [ 00:16:58.743 { 00:16:58.743 "name": "pt1", 00:16:58.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.743 "is_configured": true, 00:16:58.743 "data_offset": 256, 00:16:58.743 "data_size": 7936 00:16:58.743 }, 00:16:58.743 { 00:16:58.743 "name": null, 00:16:58.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.743 "is_configured": false, 00:16:58.743 "data_offset": 256, 00:16:58.743 "data_size": 7936 00:16:58.743 } 00:16:58.743 ] 00:16:58.743 }' 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.743 21:24:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set 
+x 00:16:59.314 [2024-11-26 21:24:17.189734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.314 [2024-11-26 21:24:17.189782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.314 [2024-11-26 21:24:17.189798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:59.314 [2024-11-26 21:24:17.189808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.314 [2024-11-26 21:24:17.190186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.314 [2024-11-26 21:24:17.190206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.314 [2024-11-26 21:24:17.190261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.314 [2024-11-26 21:24:17.190285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.314 [2024-11-26 21:24:17.190404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:59.314 [2024-11-26 21:24:17.190415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:59.314 [2024-11-26 21:24:17.190655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:59.314 [2024-11-26 21:24:17.190809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:59.314 [2024-11-26 21:24:17.190817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:59.314 [2024-11-26 21:24:17.190932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.314 pt2 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:59.314 21:24:17 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.314 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.315 "name": "raid_bdev1", 00:16:59.315 "uuid": "ccc0ff46-3238-4af4-b580-a0382848a495", 00:16:59.315 
"strip_size_kb": 0, 00:16:59.315 "state": "online", 00:16:59.315 "raid_level": "raid1", 00:16:59.315 "superblock": true, 00:16:59.315 "num_base_bdevs": 2, 00:16:59.315 "num_base_bdevs_discovered": 2, 00:16:59.315 "num_base_bdevs_operational": 2, 00:16:59.315 "base_bdevs_list": [ 00:16:59.315 { 00:16:59.315 "name": "pt1", 00:16:59.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.315 "is_configured": true, 00:16:59.315 "data_offset": 256, 00:16:59.315 "data_size": 7936 00:16:59.315 }, 00:16:59.315 { 00:16:59.315 "name": "pt2", 00:16:59.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.315 "is_configured": true, 00:16:59.315 "data_offset": 256, 00:16:59.315 "data_size": 7936 00:16:59.315 } 00:16:59.315 ] 00:16:59.315 }' 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.315 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.575 21:24:17 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.575 [2024-11-26 21:24:17.629183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:59.575 "name": "raid_bdev1", 00:16:59.575 "aliases": [ 00:16:59.575 "ccc0ff46-3238-4af4-b580-a0382848a495" 00:16:59.575 ], 00:16:59.575 "product_name": "Raid Volume", 00:16:59.575 "block_size": 4096, 00:16:59.575 "num_blocks": 7936, 00:16:59.575 "uuid": "ccc0ff46-3238-4af4-b580-a0382848a495", 00:16:59.575 "assigned_rate_limits": { 00:16:59.575 "rw_ios_per_sec": 0, 00:16:59.575 "rw_mbytes_per_sec": 0, 00:16:59.575 "r_mbytes_per_sec": 0, 00:16:59.575 "w_mbytes_per_sec": 0 00:16:59.575 }, 00:16:59.575 "claimed": false, 00:16:59.575 "zoned": false, 00:16:59.575 "supported_io_types": { 00:16:59.575 "read": true, 00:16:59.575 "write": true, 00:16:59.575 "unmap": false, 00:16:59.575 "flush": false, 00:16:59.575 "reset": true, 00:16:59.575 "nvme_admin": false, 00:16:59.575 "nvme_io": false, 00:16:59.575 "nvme_io_md": false, 00:16:59.575 "write_zeroes": true, 00:16:59.575 "zcopy": false, 00:16:59.575 "get_zone_info": false, 00:16:59.575 "zone_management": false, 00:16:59.575 "zone_append": false, 00:16:59.575 "compare": false, 00:16:59.575 "compare_and_write": false, 00:16:59.575 "abort": false, 00:16:59.575 "seek_hole": false, 00:16:59.575 "seek_data": false, 00:16:59.575 "copy": false, 00:16:59.575 "nvme_iov_md": false 00:16:59.575 }, 00:16:59.575 "memory_domains": [ 00:16:59.575 { 00:16:59.575 "dma_device_id": "system", 00:16:59.575 "dma_device_type": 1 00:16:59.575 }, 00:16:59.575 { 00:16:59.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.575 "dma_device_type": 2 00:16:59.575 }, 00:16:59.575 { 00:16:59.575 "dma_device_id": "system", 00:16:59.575 
"dma_device_type": 1 00:16:59.575 }, 00:16:59.575 { 00:16:59.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.575 "dma_device_type": 2 00:16:59.575 } 00:16:59.575 ], 00:16:59.575 "driver_specific": { 00:16:59.575 "raid": { 00:16:59.575 "uuid": "ccc0ff46-3238-4af4-b580-a0382848a495", 00:16:59.575 "strip_size_kb": 0, 00:16:59.575 "state": "online", 00:16:59.575 "raid_level": "raid1", 00:16:59.575 "superblock": true, 00:16:59.575 "num_base_bdevs": 2, 00:16:59.575 "num_base_bdevs_discovered": 2, 00:16:59.575 "num_base_bdevs_operational": 2, 00:16:59.575 "base_bdevs_list": [ 00:16:59.575 { 00:16:59.575 "name": "pt1", 00:16:59.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.575 "is_configured": true, 00:16:59.575 "data_offset": 256, 00:16:59.575 "data_size": 7936 00:16:59.575 }, 00:16:59.575 { 00:16:59.575 "name": "pt2", 00:16:59.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.575 "is_configured": true, 00:16:59.575 "data_offset": 256, 00:16:59.575 "data_size": 7936 00:16:59.575 } 00:16:59.575 ] 00:16:59.575 } 00:16:59.575 } 00:16:59.575 }' 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:59.575 pt2' 00:16:59.575 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.836 
21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:59.836 [2024-11-26 21:24:17.832795] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' ccc0ff46-3238-4af4-b580-a0382848a495 '!=' ccc0ff46-3238-4af4-b580-a0382848a495 ']' 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.836 [2024-11-26 21:24:17.880535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.836 "name": "raid_bdev1", 00:16:59.836 "uuid": "ccc0ff46-3238-4af4-b580-a0382848a495", 00:16:59.836 "strip_size_kb": 0, 00:16:59.836 "state": "online", 00:16:59.836 "raid_level": "raid1", 00:16:59.836 "superblock": true, 00:16:59.836 "num_base_bdevs": 2, 00:16:59.836 "num_base_bdevs_discovered": 1, 00:16:59.836 "num_base_bdevs_operational": 1, 00:16:59.836 "base_bdevs_list": [ 00:16:59.836 { 00:16:59.836 "name": null, 00:16:59.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.836 "is_configured": false, 00:16:59.836 "data_offset": 0, 00:16:59.836 "data_size": 7936 00:16:59.836 }, 00:16:59.836 { 00:16:59.836 "name": "pt2", 00:16:59.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.836 "is_configured": true, 00:16:59.836 "data_offset": 256, 00:16:59.836 "data_size": 7936 00:16:59.836 } 00:16:59.836 ] 00:16:59.836 }' 00:16:59.836 21:24:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.837 21:24:17 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.407 [2024-11-26 21:24:18.311869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.407 [2024-11-26 21:24:18.311939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.407 [2024-11-26 21:24:18.312044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.407 [2024-11-26 21:24:18.312104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.407 [2024-11-26 21:24:18.312191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.407 [2024-11-26 21:24:18.387737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:00.407 [2024-11-26 21:24:18.387823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.407 [2024-11-26 21:24:18.387842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:00.407 [2024-11-26 21:24:18.387853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.407 [2024-11-26 21:24:18.390344] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.407 [2024-11-26 21:24:18.390413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:00.407 [2024-11-26 21:24:18.390508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:00.407 [2024-11-26 21:24:18.390569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.407 [2024-11-26 21:24:18.390695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:00.407 [2024-11-26 21:24:18.390734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:00.407 [2024-11-26 21:24:18.390979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:00.407 [2024-11-26 21:24:18.391173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:00.407 [2024-11-26 21:24:18.391211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:00.407 [2024-11-26 21:24:18.391396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.407 pt2 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.407 "name": "raid_bdev1", 00:17:00.407 "uuid": "ccc0ff46-3238-4af4-b580-a0382848a495", 00:17:00.407 "strip_size_kb": 0, 00:17:00.407 "state": "online", 00:17:00.407 "raid_level": "raid1", 00:17:00.407 "superblock": true, 00:17:00.407 "num_base_bdevs": 2, 00:17:00.407 "num_base_bdevs_discovered": 1, 00:17:00.407 "num_base_bdevs_operational": 1, 00:17:00.407 "base_bdevs_list": [ 00:17:00.407 { 00:17:00.407 "name": null, 00:17:00.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.407 "is_configured": false, 00:17:00.407 "data_offset": 256, 00:17:00.407 "data_size": 7936 00:17:00.407 }, 00:17:00.407 { 00:17:00.407 "name": "pt2", 00:17:00.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.407 "is_configured": true, 00:17:00.407 "data_offset": 256, 00:17:00.407 "data_size": 7936 00:17:00.407 } 00:17:00.407 ] 00:17:00.407 }' 
00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.407 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.680 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.680 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.680 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.680 [2024-11-26 21:24:18.810950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.680 [2024-11-26 21:24:18.811023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.680 [2024-11-26 21:24:18.811094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.680 [2024-11-26 21:24:18.811150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.680 [2024-11-26 21:24:18.811181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:00.680 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.680 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.680 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:00.680 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.680 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.950 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.950 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:00.950 21:24:18 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:00.950 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:00.950 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:00.950 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.950 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.950 [2024-11-26 21:24:18.874869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:00.950 [2024-11-26 21:24:18.874918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.950 [2024-11-26 21:24:18.874934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:00.950 [2024-11-26 21:24:18.874942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.950 [2024-11-26 21:24:18.877393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.950 [2024-11-26 21:24:18.877425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:00.950 [2024-11-26 21:24:18.877497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:00.950 [2024-11-26 21:24:18.877537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:00.951 [2024-11-26 21:24:18.877675] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:00.951 [2024-11-26 21:24:18.877686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.951 [2024-11-26 21:24:18.877702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:00.951 [2024-11-26 21:24:18.877767] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.951 [2024-11-26 21:24:18.877840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:00.951 [2024-11-26 21:24:18.877850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:00.951 [2024-11-26 21:24:18.878103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:00.951 [2024-11-26 21:24:18.878265] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:00.951 [2024-11-26 21:24:18.878279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:00.951 [2024-11-26 21:24:18.878426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.951 pt1 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.951 "name": "raid_bdev1", 00:17:00.951 "uuid": "ccc0ff46-3238-4af4-b580-a0382848a495", 00:17:00.951 "strip_size_kb": 0, 00:17:00.951 "state": "online", 00:17:00.951 "raid_level": "raid1", 00:17:00.951 "superblock": true, 00:17:00.951 "num_base_bdevs": 2, 00:17:00.951 "num_base_bdevs_discovered": 1, 00:17:00.951 "num_base_bdevs_operational": 1, 00:17:00.951 "base_bdevs_list": [ 00:17:00.951 { 00:17:00.951 "name": null, 00:17:00.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.951 "is_configured": false, 00:17:00.951 "data_offset": 256, 00:17:00.951 "data_size": 7936 00:17:00.951 }, 00:17:00.951 { 00:17:00.951 "name": "pt2", 00:17:00.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.951 "is_configured": true, 00:17:00.951 "data_offset": 256, 00:17:00.951 "data_size": 7936 00:17:00.951 } 00:17:00.951 ] 00:17:00.951 }' 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.951 21:24:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.227 21:24:19 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:01.227 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.227 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.227 21:24:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:01.227 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.227 21:24:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:01.227 21:24:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:01.227 21:24:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:01.227 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.227 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.227 [2024-11-26 21:24:19.374242] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' ccc0ff46-3238-4af4-b580-a0382848a495 '!=' ccc0ff46-3238-4af4-b580-a0382848a495 ']' 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85977 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85977 ']' 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85977 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85977 00:17:01.498 killing process with pid 85977 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85977' 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85977 00:17:01.498 [2024-11-26 21:24:19.456136] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:01.498 [2024-11-26 21:24:19.456253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.498 [2024-11-26 21:24:19.456302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.498 [2024-11-26 21:24:19.456317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:01.498 21:24:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85977 00:17:01.757 [2024-11-26 21:24:19.668682] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.696 ************************************ 00:17:02.696 END TEST raid_superblock_test_4k 00:17:02.696 ************************************ 00:17:02.696 21:24:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:02.696 00:17:02.696 real 0m5.958s 00:17:02.696 user 0m8.801s 00:17:02.696 sys 0m1.157s 00:17:02.696 21:24:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.696 21:24:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.955 21:24:20 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:02.955 21:24:20 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:02.955 21:24:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:02.955 21:24:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.955 21:24:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:02.955 ************************************ 00:17:02.955 START TEST raid_rebuild_test_sb_4k 00:17:02.955 ************************************ 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86302 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86302 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86302 ']' 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:17:02.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.955 21:24:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.955 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:02.955 Zero copy mechanism will not be used. 00:17:02.955 [2024-11-26 21:24:21.010199] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:02.955 [2024-11-26 21:24:21.010307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86302 ] 00:17:03.214 [2024-11-26 21:24:21.180883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.214 [2024-11-26 21:24:21.302132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.473 [2024-11-26 21:24:21.530494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.473 [2024-11-26 21:24:21.530557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.731 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.731 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:03.731 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.731 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:03.731 
21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.731 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.731 BaseBdev1_malloc 00:17:03.731 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.731 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:03.731 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.732 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.732 [2024-11-26 21:24:21.877286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:03.732 [2024-11-26 21:24:21.877346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.732 [2024-11-26 21:24:21.877370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:03.732 [2024-11-26 21:24:21.877382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.732 [2024-11-26 21:24:21.879621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.732 [2024-11-26 21:24:21.879655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:03.732 BaseBdev1 00:17:03.732 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.732 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.732 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:03.732 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.732 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.992 BaseBdev2_malloc 00:17:03.992 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.992 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:03.992 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.992 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.992 [2024-11-26 21:24:21.940008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:03.992 [2024-11-26 21:24:21.940062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.992 [2024-11-26 21:24:21.940085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:03.992 [2024-11-26 21:24:21.940096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.992 [2024-11-26 21:24:21.942352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.992 [2024-11-26 21:24:21.942383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:03.992 BaseBdev2 00:17:03.992 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.992 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:03.992 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.992 21:24:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.992 spare_malloc 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.992 spare_delay 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.992 [2024-11-26 21:24:22.039720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:03.992 [2024-11-26 21:24:22.039769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.992 [2024-11-26 21:24:22.039787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:03.992 [2024-11-26 21:24:22.039800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.992 [2024-11-26 21:24:22.042143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.992 [2024-11-26 21:24:22.042176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:03.992 spare 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.992 
[2024-11-26 21:24:22.051762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.992 [2024-11-26 21:24:22.053793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.992 [2024-11-26 21:24:22.053983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:03.992 [2024-11-26 21:24:22.054000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:03.992 [2024-11-26 21:24:22.054229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:03.992 [2024-11-26 21:24:22.054398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:03.992 [2024-11-26 21:24:22.054411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:03.992 [2024-11-26 21:24:22.054552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.992 "name": "raid_bdev1", 00:17:03.992 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:03.992 "strip_size_kb": 0, 00:17:03.992 "state": "online", 00:17:03.992 "raid_level": "raid1", 00:17:03.992 "superblock": true, 00:17:03.992 "num_base_bdevs": 2, 00:17:03.992 "num_base_bdevs_discovered": 2, 00:17:03.992 "num_base_bdevs_operational": 2, 00:17:03.992 "base_bdevs_list": [ 00:17:03.992 { 00:17:03.992 "name": "BaseBdev1", 00:17:03.992 "uuid": "a27955f9-8442-5cff-a7aa-3d8680c45db0", 00:17:03.992 "is_configured": true, 00:17:03.992 "data_offset": 256, 00:17:03.992 "data_size": 7936 00:17:03.992 }, 00:17:03.992 { 00:17:03.992 "name": "BaseBdev2", 00:17:03.992 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:03.992 "is_configured": true, 00:17:03.992 "data_offset": 256, 00:17:03.992 "data_size": 7936 00:17:03.992 } 00:17:03.992 ] 00:17:03.992 }' 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.992 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:04.561 [2024-11-26 21:24:22.515179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:04.561 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:04.820 [2024-11-26 21:24:22.766521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:04.820 /dev/nbd0 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.820 1+0 records in 00:17:04.820 1+0 records out 00:17:04.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429657 s, 9.5 MB/s 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:04.820 21:24:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:05.388 7936+0 records in 00:17:05.388 7936+0 records out 00:17:05.388 32505856 bytes (33 MB, 31 MiB) copied, 0.644346 s, 50.4 MB/s 00:17:05.388 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:05.388 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.388 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:05.388 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.388 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:05.388 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.388 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:05.648 [2024-11-26 21:24:23.694207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.648 [2024-11-26 21:24:23.710510] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.648 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.649 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.649 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.649 "name": 
"raid_bdev1", 00:17:05.649 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:05.649 "strip_size_kb": 0, 00:17:05.649 "state": "online", 00:17:05.649 "raid_level": "raid1", 00:17:05.649 "superblock": true, 00:17:05.649 "num_base_bdevs": 2, 00:17:05.649 "num_base_bdevs_discovered": 1, 00:17:05.649 "num_base_bdevs_operational": 1, 00:17:05.649 "base_bdevs_list": [ 00:17:05.649 { 00:17:05.649 "name": null, 00:17:05.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.649 "is_configured": false, 00:17:05.649 "data_offset": 0, 00:17:05.649 "data_size": 7936 00:17:05.649 }, 00:17:05.649 { 00:17:05.649 "name": "BaseBdev2", 00:17:05.649 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:05.649 "is_configured": true, 00:17:05.649 "data_offset": 256, 00:17:05.649 "data_size": 7936 00:17:05.649 } 00:17:05.649 ] 00:17:05.649 }' 00:17:05.649 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.649 21:24:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.218 21:24:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:06.218 21:24:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.218 21:24:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.218 [2024-11-26 21:24:24.153758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.218 [2024-11-26 21:24:24.172842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:06.218 21:24:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.218 21:24:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:06.218 [2024-11-26 21:24:24.174898] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:07.158 21:24:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.158 "name": "raid_bdev1", 00:17:07.158 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:07.158 "strip_size_kb": 0, 00:17:07.158 "state": "online", 00:17:07.158 "raid_level": "raid1", 00:17:07.158 "superblock": true, 00:17:07.158 "num_base_bdevs": 2, 00:17:07.158 "num_base_bdevs_discovered": 2, 00:17:07.158 "num_base_bdevs_operational": 2, 00:17:07.158 "process": { 00:17:07.158 "type": "rebuild", 00:17:07.158 "target": "spare", 00:17:07.158 "progress": { 00:17:07.158 "blocks": 2560, 00:17:07.158 "percent": 32 00:17:07.158 } 00:17:07.158 }, 00:17:07.158 "base_bdevs_list": [ 00:17:07.158 { 00:17:07.158 "name": "spare", 00:17:07.158 "uuid": "ffc74c2c-48e0-52f8-99c4-d76eb821f7af", 00:17:07.158 "is_configured": true, 00:17:07.158 "data_offset": 256, 
00:17:07.158 "data_size": 7936 00:17:07.158 }, 00:17:07.158 { 00:17:07.158 "name": "BaseBdev2", 00:17:07.158 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:07.158 "is_configured": true, 00:17:07.158 "data_offset": 256, 00:17:07.158 "data_size": 7936 00:17:07.158 } 00:17:07.158 ] 00:17:07.158 }' 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.158 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.418 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.418 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.419 [2024-11-26 21:24:25.333894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.419 [2024-11-26 21:24:25.383455] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:07.419 [2024-11-26 21:24:25.383516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.419 [2024-11-26 21:24:25.383530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.419 [2024-11-26 21:24:25.383540] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.419 
21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.419 "name": "raid_bdev1", 00:17:07.419 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:07.419 "strip_size_kb": 0, 00:17:07.419 "state": "online", 00:17:07.419 "raid_level": "raid1", 00:17:07.419 "superblock": true, 00:17:07.419 "num_base_bdevs": 2, 00:17:07.419 "num_base_bdevs_discovered": 1, 00:17:07.419 
"num_base_bdevs_operational": 1, 00:17:07.419 "base_bdevs_list": [ 00:17:07.419 { 00:17:07.419 "name": null, 00:17:07.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.419 "is_configured": false, 00:17:07.419 "data_offset": 0, 00:17:07.419 "data_size": 7936 00:17:07.419 }, 00:17:07.419 { 00:17:07.419 "name": "BaseBdev2", 00:17:07.419 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:07.419 "is_configured": true, 00:17:07.419 "data_offset": 256, 00:17:07.419 "data_size": 7936 00:17:07.419 } 00:17:07.419 ] 00:17:07.419 }' 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.419 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.989 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:07.989 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.989 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:07.989 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:07.989 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.989 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.989 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.989 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.989 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.990 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.990 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.990 
"name": "raid_bdev1", 00:17:07.990 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:07.990 "strip_size_kb": 0, 00:17:07.990 "state": "online", 00:17:07.990 "raid_level": "raid1", 00:17:07.990 "superblock": true, 00:17:07.990 "num_base_bdevs": 2, 00:17:07.990 "num_base_bdevs_discovered": 1, 00:17:07.990 "num_base_bdevs_operational": 1, 00:17:07.990 "base_bdevs_list": [ 00:17:07.990 { 00:17:07.990 "name": null, 00:17:07.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.990 "is_configured": false, 00:17:07.990 "data_offset": 0, 00:17:07.990 "data_size": 7936 00:17:07.990 }, 00:17:07.990 { 00:17:07.990 "name": "BaseBdev2", 00:17:07.990 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:07.990 "is_configured": true, 00:17:07.990 "data_offset": 256, 00:17:07.990 "data_size": 7936 00:17:07.990 } 00:17:07.990 ] 00:17:07.990 }' 00:17:07.990 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.990 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.990 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.990 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.990 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:07.990 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.990 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.990 [2024-11-26 21:24:25.959253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.990 [2024-11-26 21:24:25.976123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:07.990 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:07.990 21:24:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:07.990 [2024-11-26 21:24:25.978210] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.931 21:24:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.931 21:24:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.931 21:24:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.931 21:24:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.931 21:24:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.931 21:24:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.931 21:24:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.931 21:24:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.931 21:24:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.931 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.931 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.931 "name": "raid_bdev1", 00:17:08.931 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:08.931 "strip_size_kb": 0, 00:17:08.931 "state": "online", 00:17:08.931 "raid_level": "raid1", 00:17:08.931 "superblock": true, 00:17:08.931 "num_base_bdevs": 2, 00:17:08.931 "num_base_bdevs_discovered": 2, 00:17:08.931 "num_base_bdevs_operational": 2, 00:17:08.931 "process": { 00:17:08.931 "type": "rebuild", 00:17:08.931 "target": "spare", 00:17:08.931 "progress": { 00:17:08.931 "blocks": 2560, 00:17:08.931 
"percent": 32 00:17:08.931 } 00:17:08.931 }, 00:17:08.931 "base_bdevs_list": [ 00:17:08.931 { 00:17:08.931 "name": "spare", 00:17:08.931 "uuid": "ffc74c2c-48e0-52f8-99c4-d76eb821f7af", 00:17:08.931 "is_configured": true, 00:17:08.931 "data_offset": 256, 00:17:08.931 "data_size": 7936 00:17:08.931 }, 00:17:08.931 { 00:17:08.931 "name": "BaseBdev2", 00:17:08.931 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:08.931 "is_configured": true, 00:17:08.931 "data_offset": 256, 00:17:08.931 "data_size": 7936 00:17:08.931 } 00:17:08.931 ] 00:17:08.931 }' 00:17:08.931 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.931 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:09.191 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=669 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.191 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.192 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.192 "name": "raid_bdev1", 00:17:09.192 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:09.192 "strip_size_kb": 0, 00:17:09.192 "state": "online", 00:17:09.192 "raid_level": "raid1", 00:17:09.192 "superblock": true, 00:17:09.192 "num_base_bdevs": 2, 00:17:09.192 "num_base_bdevs_discovered": 2, 00:17:09.192 "num_base_bdevs_operational": 2, 00:17:09.192 "process": { 00:17:09.192 "type": "rebuild", 00:17:09.192 "target": "spare", 00:17:09.192 "progress": { 00:17:09.192 "blocks": 2816, 00:17:09.192 "percent": 35 00:17:09.192 } 00:17:09.192 }, 00:17:09.192 "base_bdevs_list": [ 00:17:09.192 { 00:17:09.192 "name": "spare", 00:17:09.192 "uuid": "ffc74c2c-48e0-52f8-99c4-d76eb821f7af", 00:17:09.192 "is_configured": true, 00:17:09.192 "data_offset": 256, 00:17:09.192 "data_size": 7936 00:17:09.192 }, 00:17:09.192 { 00:17:09.192 "name": "BaseBdev2", 
00:17:09.192 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:09.192 "is_configured": true, 00:17:09.192 "data_offset": 256, 00:17:09.192 "data_size": 7936 00:17:09.192 } 00:17:09.192 ] 00:17:09.192 }' 00:17:09.192 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.192 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.192 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.192 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.192 21:24:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.132 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.132 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.132 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.132 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.132 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.132 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.132 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.132 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.132 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.132 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.132 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.393 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.393 "name": "raid_bdev1", 00:17:10.393 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:10.393 "strip_size_kb": 0, 00:17:10.393 "state": "online", 00:17:10.393 "raid_level": "raid1", 00:17:10.393 "superblock": true, 00:17:10.393 "num_base_bdevs": 2, 00:17:10.393 "num_base_bdevs_discovered": 2, 00:17:10.393 "num_base_bdevs_operational": 2, 00:17:10.393 "process": { 00:17:10.393 "type": "rebuild", 00:17:10.393 "target": "spare", 00:17:10.393 "progress": { 00:17:10.393 "blocks": 5632, 00:17:10.393 "percent": 70 00:17:10.393 } 00:17:10.393 }, 00:17:10.393 "base_bdevs_list": [ 00:17:10.393 { 00:17:10.393 "name": "spare", 00:17:10.393 "uuid": "ffc74c2c-48e0-52f8-99c4-d76eb821f7af", 00:17:10.393 "is_configured": true, 00:17:10.393 "data_offset": 256, 00:17:10.393 "data_size": 7936 00:17:10.393 }, 00:17:10.393 { 00:17:10.393 "name": "BaseBdev2", 00:17:10.393 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:10.393 "is_configured": true, 00:17:10.393 "data_offset": 256, 00:17:10.393 "data_size": 7936 00:17:10.393 } 00:17:10.393 ] 00:17:10.393 }' 00:17:10.393 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.393 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.393 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.393 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.393 21:24:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.963 [2024-11-26 21:24:29.099675] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:10.963 [2024-11-26 21:24:29.099747] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:10.963 [2024-11-26 21:24:29.099844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.532 "name": "raid_bdev1", 00:17:11.532 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:11.532 "strip_size_kb": 0, 00:17:11.532 "state": "online", 00:17:11.532 "raid_level": "raid1", 00:17:11.532 "superblock": true, 00:17:11.532 "num_base_bdevs": 2, 00:17:11.532 "num_base_bdevs_discovered": 2, 00:17:11.532 "num_base_bdevs_operational": 2, 00:17:11.532 "base_bdevs_list": [ 00:17:11.532 { 00:17:11.532 "name": 
"spare", 00:17:11.532 "uuid": "ffc74c2c-48e0-52f8-99c4-d76eb821f7af", 00:17:11.532 "is_configured": true, 00:17:11.532 "data_offset": 256, 00:17:11.532 "data_size": 7936 00:17:11.532 }, 00:17:11.532 { 00:17:11.532 "name": "BaseBdev2", 00:17:11.532 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:11.532 "is_configured": true, 00:17:11.532 "data_offset": 256, 00:17:11.532 "data_size": 7936 00:17:11.532 } 00:17:11.532 ] 00:17:11.532 }' 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.532 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.532 "name": "raid_bdev1", 00:17:11.532 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:11.532 "strip_size_kb": 0, 00:17:11.532 "state": "online", 00:17:11.532 "raid_level": "raid1", 00:17:11.532 "superblock": true, 00:17:11.532 "num_base_bdevs": 2, 00:17:11.532 "num_base_bdevs_discovered": 2, 00:17:11.532 "num_base_bdevs_operational": 2, 00:17:11.532 "base_bdevs_list": [ 00:17:11.532 { 00:17:11.533 "name": "spare", 00:17:11.533 "uuid": "ffc74c2c-48e0-52f8-99c4-d76eb821f7af", 00:17:11.533 "is_configured": true, 00:17:11.533 "data_offset": 256, 00:17:11.533 "data_size": 7936 00:17:11.533 }, 00:17:11.533 { 00:17:11.533 "name": "BaseBdev2", 00:17:11.533 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:11.533 "is_configured": true, 00:17:11.533 "data_offset": 256, 00:17:11.533 "data_size": 7936 00:17:11.533 } 00:17:11.533 ] 00:17:11.533 }' 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.533 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.792 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.792 "name": "raid_bdev1", 00:17:11.792 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:11.792 "strip_size_kb": 0, 00:17:11.792 "state": "online", 00:17:11.792 "raid_level": "raid1", 00:17:11.792 "superblock": true, 00:17:11.792 "num_base_bdevs": 2, 00:17:11.792 "num_base_bdevs_discovered": 2, 00:17:11.792 "num_base_bdevs_operational": 2, 00:17:11.792 "base_bdevs_list": [ 00:17:11.792 { 00:17:11.792 "name": "spare", 00:17:11.792 "uuid": "ffc74c2c-48e0-52f8-99c4-d76eb821f7af", 00:17:11.792 "is_configured": true, 00:17:11.792 "data_offset": 256, 00:17:11.792 "data_size": 7936 00:17:11.792 }, 00:17:11.792 
{ 00:17:11.792 "name": "BaseBdev2", 00:17:11.792 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:11.792 "is_configured": true, 00:17:11.792 "data_offset": 256, 00:17:11.792 "data_size": 7936 00:17:11.792 } 00:17:11.792 ] 00:17:11.792 }' 00:17:11.792 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.792 21:24:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.053 [2024-11-26 21:24:30.082685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.053 [2024-11-26 21:24:30.082719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.053 [2024-11-26 21:24:30.082801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.053 [2024-11-26 21:24:30.082869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.053 [2024-11-26 21:24:30.082881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.053 
21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.053 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:12.313 /dev/nbd0 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:12.313 21:24:30 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.313 1+0 records in 00:17:12.313 1+0 records out 00:17:12.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399226 s, 10.3 MB/s 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.313 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:12.573 /dev/nbd1 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.573 1+0 records in 00:17:12.573 1+0 records out 00:17:12.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427931 s, 9.6 MB/s 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.573 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:12.833 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:12.833 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.833 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:12.833 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.833 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:12.833 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.833 21:24:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.094 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.355 [2024-11-26 21:24:31.262746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:13.355 [2024-11-26 21:24:31.262801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.355 [2024-11-26 21:24:31.262830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:13.355 [2024-11-26 21:24:31.262839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.355 [2024-11-26 21:24:31.265505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.355 [2024-11-26 21:24:31.265549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:13.355 [2024-11-26 21:24:31.265643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:13.355 [2024-11-26 21:24:31.265701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.355 [2024-11-26 21:24:31.265862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.355 spare 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.355 [2024-11-26 21:24:31.365763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:13.355 [2024-11-26 21:24:31.365793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:13.355 [2024-11-26 21:24:31.366083] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:13.355 [2024-11-26 21:24:31.366275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:13.355 [2024-11-26 21:24:31.366294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:13.355 [2024-11-26 21:24:31.366461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.355 21:24:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.355 "name": "raid_bdev1", 00:17:13.355 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:13.355 "strip_size_kb": 0, 00:17:13.355 "state": "online", 00:17:13.355 "raid_level": "raid1", 00:17:13.355 "superblock": true, 00:17:13.355 "num_base_bdevs": 2, 00:17:13.355 "num_base_bdevs_discovered": 2, 00:17:13.355 "num_base_bdevs_operational": 2, 00:17:13.355 "base_bdevs_list": [ 00:17:13.355 { 00:17:13.355 "name": "spare", 00:17:13.355 "uuid": "ffc74c2c-48e0-52f8-99c4-d76eb821f7af", 00:17:13.355 "is_configured": true, 00:17:13.355 "data_offset": 256, 00:17:13.355 "data_size": 7936 00:17:13.355 }, 00:17:13.355 { 00:17:13.355 "name": "BaseBdev2", 00:17:13.355 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:13.355 "is_configured": true, 00:17:13.355 "data_offset": 256, 00:17:13.355 "data_size": 7936 00:17:13.355 } 00:17:13.355 ] 00:17:13.355 }' 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.355 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.927 "name": "raid_bdev1", 00:17:13.927 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:13.927 "strip_size_kb": 0, 00:17:13.927 "state": "online", 00:17:13.927 "raid_level": "raid1", 00:17:13.927 "superblock": true, 00:17:13.927 "num_base_bdevs": 2, 00:17:13.927 "num_base_bdevs_discovered": 2, 00:17:13.927 "num_base_bdevs_operational": 2, 00:17:13.927 "base_bdevs_list": [ 00:17:13.927 { 00:17:13.927 "name": "spare", 00:17:13.927 "uuid": "ffc74c2c-48e0-52f8-99c4-d76eb821f7af", 00:17:13.927 "is_configured": true, 00:17:13.927 "data_offset": 256, 00:17:13.927 "data_size": 7936 00:17:13.927 }, 00:17:13.927 { 00:17:13.927 "name": "BaseBdev2", 00:17:13.927 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:13.927 "is_configured": true, 00:17:13.927 "data_offset": 256, 00:17:13.927 "data_size": 7936 00:17:13.927 } 00:17:13.927 ] 00:17:13.927 }' 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.927 [2024-11-26 21:24:31.993536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.927 21:24:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.927 21:24:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.927 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.927 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.927 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.927 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.927 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.927 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.927 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.927 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.927 "name": "raid_bdev1", 00:17:13.927 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:13.927 "strip_size_kb": 0, 00:17:13.927 "state": "online", 00:17:13.927 "raid_level": "raid1", 00:17:13.927 "superblock": true, 00:17:13.927 "num_base_bdevs": 2, 00:17:13.927 "num_base_bdevs_discovered": 1, 00:17:13.927 "num_base_bdevs_operational": 1, 00:17:13.927 "base_bdevs_list": [ 00:17:13.927 { 00:17:13.927 "name": null, 00:17:13.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.927 "is_configured": false, 00:17:13.927 "data_offset": 0, 00:17:13.927 "data_size": 7936 00:17:13.927 }, 00:17:13.927 { 00:17:13.927 "name": "BaseBdev2", 00:17:13.927 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:13.927 "is_configured": true, 00:17:13.927 "data_offset": 256, 00:17:13.927 "data_size": 7936 00:17:13.927 } 00:17:13.927 ] 00:17:13.927 }' 00:17:13.927 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.927 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.498 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.498 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.498 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.498 [2024-11-26 21:24:32.468728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.498 [2024-11-26 21:24:32.468894] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.498 [2024-11-26 21:24:32.468915] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:14.498 [2024-11-26 21:24:32.468945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.498 [2024-11-26 21:24:32.486224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:14.498 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.498 21:24:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:14.498 [2024-11-26 21:24:32.488271] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.438 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.438 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.438 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.438 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.438 
21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.438 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.438 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.438 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.438 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.438 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.438 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.438 "name": "raid_bdev1", 00:17:15.438 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:15.438 "strip_size_kb": 0, 00:17:15.438 "state": "online", 00:17:15.438 "raid_level": "raid1", 00:17:15.438 "superblock": true, 00:17:15.438 "num_base_bdevs": 2, 00:17:15.438 "num_base_bdevs_discovered": 2, 00:17:15.438 "num_base_bdevs_operational": 2, 00:17:15.438 "process": { 00:17:15.438 "type": "rebuild", 00:17:15.438 "target": "spare", 00:17:15.438 "progress": { 00:17:15.438 "blocks": 2560, 00:17:15.438 "percent": 32 00:17:15.438 } 00:17:15.438 }, 00:17:15.438 "base_bdevs_list": [ 00:17:15.438 { 00:17:15.438 "name": "spare", 00:17:15.438 "uuid": "ffc74c2c-48e0-52f8-99c4-d76eb821f7af", 00:17:15.438 "is_configured": true, 00:17:15.438 "data_offset": 256, 00:17:15.438 "data_size": 7936 00:17:15.438 }, 00:17:15.438 { 00:17:15.438 "name": "BaseBdev2", 00:17:15.438 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:15.438 "is_configured": true, 00:17:15.438 "data_offset": 256, 00:17:15.438 "data_size": 7936 00:17:15.438 } 00:17:15.438 ] 00:17:15.438 }' 00:17:15.438 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.699 21:24:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.699 [2024-11-26 21:24:33.643779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.699 [2024-11-26 21:24:33.696441] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:15.699 [2024-11-26 21:24:33.696496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.699 [2024-11-26 21:24:33.696511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.699 [2024-11-26 21:24:33.696520] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.699 21:24:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.699 "name": "raid_bdev1", 00:17:15.699 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:15.699 "strip_size_kb": 0, 00:17:15.699 "state": "online", 00:17:15.699 "raid_level": "raid1", 00:17:15.699 "superblock": true, 00:17:15.699 "num_base_bdevs": 2, 00:17:15.699 "num_base_bdevs_discovered": 1, 00:17:15.699 "num_base_bdevs_operational": 1, 00:17:15.699 "base_bdevs_list": [ 00:17:15.699 { 00:17:15.699 "name": null, 00:17:15.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.699 "is_configured": false, 00:17:15.699 "data_offset": 0, 00:17:15.699 "data_size": 7936 00:17:15.699 }, 00:17:15.699 { 00:17:15.699 "name": "BaseBdev2", 00:17:15.699 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:15.699 "is_configured": true, 00:17:15.699 "data_offset": 256, 00:17:15.699 
"data_size": 7936 00:17:15.699 } 00:17:15.699 ] 00:17:15.699 }' 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.699 21:24:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.269 21:24:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.269 21:24:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.269 21:24:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.269 [2024-11-26 21:24:34.137984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.269 [2024-11-26 21:24:34.138040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.269 [2024-11-26 21:24:34.138062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:16.269 [2024-11-26 21:24:34.138075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.269 [2024-11-26 21:24:34.138578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.269 [2024-11-26 21:24:34.138607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.269 [2024-11-26 21:24:34.138702] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:16.269 [2024-11-26 21:24:34.138717] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:16.269 [2024-11-26 21:24:34.138729] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:16.269 [2024-11-26 21:24:34.138755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.269 [2024-11-26 21:24:34.154930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:16.269 spare 00:17:16.269 21:24:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.269 21:24:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:16.269 [2024-11-26 21:24:34.157085] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.210 "name": "raid_bdev1", 00:17:17.210 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:17.210 "strip_size_kb": 0, 00:17:17.210 
"state": "online", 00:17:17.210 "raid_level": "raid1", 00:17:17.210 "superblock": true, 00:17:17.210 "num_base_bdevs": 2, 00:17:17.210 "num_base_bdevs_discovered": 2, 00:17:17.210 "num_base_bdevs_operational": 2, 00:17:17.210 "process": { 00:17:17.210 "type": "rebuild", 00:17:17.210 "target": "spare", 00:17:17.210 "progress": { 00:17:17.210 "blocks": 2560, 00:17:17.210 "percent": 32 00:17:17.210 } 00:17:17.210 }, 00:17:17.210 "base_bdevs_list": [ 00:17:17.210 { 00:17:17.210 "name": "spare", 00:17:17.210 "uuid": "ffc74c2c-48e0-52f8-99c4-d76eb821f7af", 00:17:17.210 "is_configured": true, 00:17:17.210 "data_offset": 256, 00:17:17.210 "data_size": 7936 00:17:17.210 }, 00:17:17.210 { 00:17:17.210 "name": "BaseBdev2", 00:17:17.210 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:17.210 "is_configured": true, 00:17:17.210 "data_offset": 256, 00:17:17.210 "data_size": 7936 00:17:17.210 } 00:17:17.210 ] 00:17:17.210 }' 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.210 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.210 [2024-11-26 21:24:35.324582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.469 [2024-11-26 21:24:35.365284] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:17.469 [2024-11-26 21:24:35.365335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.469 [2024-11-26 21:24:35.365353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.469 [2024-11-26 21:24:35.365361] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.469 21:24:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.469 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.470 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.470 "name": "raid_bdev1", 00:17:17.470 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:17.470 "strip_size_kb": 0, 00:17:17.470 "state": "online", 00:17:17.470 "raid_level": "raid1", 00:17:17.470 "superblock": true, 00:17:17.470 "num_base_bdevs": 2, 00:17:17.470 "num_base_bdevs_discovered": 1, 00:17:17.470 "num_base_bdevs_operational": 1, 00:17:17.470 "base_bdevs_list": [ 00:17:17.470 { 00:17:17.470 "name": null, 00:17:17.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.470 "is_configured": false, 00:17:17.470 "data_offset": 0, 00:17:17.470 "data_size": 7936 00:17:17.470 }, 00:17:17.470 { 00:17:17.470 "name": "BaseBdev2", 00:17:17.470 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:17.470 "is_configured": true, 00:17:17.470 "data_offset": 256, 00:17:17.470 "data_size": 7936 00:17:17.470 } 00:17:17.470 ] 00:17:17.470 }' 00:17:17.470 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.470 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.729 "name": "raid_bdev1", 00:17:17.729 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:17.729 "strip_size_kb": 0, 00:17:17.729 "state": "online", 00:17:17.729 "raid_level": "raid1", 00:17:17.729 "superblock": true, 00:17:17.729 "num_base_bdevs": 2, 00:17:17.729 "num_base_bdevs_discovered": 1, 00:17:17.729 "num_base_bdevs_operational": 1, 00:17:17.729 "base_bdevs_list": [ 00:17:17.729 { 00:17:17.729 "name": null, 00:17:17.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.729 "is_configured": false, 00:17:17.729 "data_offset": 0, 00:17:17.729 "data_size": 7936 00:17:17.729 }, 00:17:17.729 { 00:17:17.729 "name": "BaseBdev2", 00:17:17.729 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:17.729 "is_configured": true, 00:17:17.729 "data_offset": 256, 00:17:17.729 "data_size": 7936 00:17:17.729 } 00:17:17.729 ] 00:17:17.729 }' 00:17:17.729 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.990 [2024-11-26 21:24:35.945395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:17.990 [2024-11-26 21:24:35.945449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.990 [2024-11-26 21:24:35.945480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:17.990 [2024-11-26 21:24:35.945501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.990 [2024-11-26 21:24:35.946016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.990 [2024-11-26 21:24:35.946034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:17.990 [2024-11-26 21:24:35.946113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:17.990 [2024-11-26 21:24:35.946126] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:17.990 [2024-11-26 21:24:35.946138] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:17.990 [2024-11-26 21:24:35.946149] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:17.990 BaseBdev1 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.990 21:24:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.928 "name": "raid_bdev1", 00:17:18.928 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:18.928 "strip_size_kb": 0, 00:17:18.928 "state": "online", 00:17:18.928 "raid_level": "raid1", 00:17:18.928 "superblock": true, 00:17:18.928 "num_base_bdevs": 2, 00:17:18.928 "num_base_bdevs_discovered": 1, 00:17:18.928 "num_base_bdevs_operational": 1, 00:17:18.928 "base_bdevs_list": [ 00:17:18.928 { 00:17:18.928 "name": null, 00:17:18.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.928 "is_configured": false, 00:17:18.928 "data_offset": 0, 00:17:18.928 "data_size": 7936 00:17:18.928 }, 00:17:18.928 { 00:17:18.928 "name": "BaseBdev2", 00:17:18.928 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:18.928 "is_configured": true, 00:17:18.928 "data_offset": 256, 00:17:18.928 "data_size": 7936 00:17:18.928 } 00:17:18.928 ] 00:17:18.928 }' 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.928 21:24:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.498 "name": "raid_bdev1", 00:17:19.498 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:19.498 "strip_size_kb": 0, 00:17:19.498 "state": "online", 00:17:19.498 "raid_level": "raid1", 00:17:19.498 "superblock": true, 00:17:19.498 "num_base_bdevs": 2, 00:17:19.498 "num_base_bdevs_discovered": 1, 00:17:19.498 "num_base_bdevs_operational": 1, 00:17:19.498 "base_bdevs_list": [ 00:17:19.498 { 00:17:19.498 "name": null, 00:17:19.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.498 "is_configured": false, 00:17:19.498 "data_offset": 0, 00:17:19.498 "data_size": 7936 00:17:19.498 }, 00:17:19.498 { 00:17:19.498 "name": "BaseBdev2", 00:17:19.498 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:19.498 "is_configured": true, 00:17:19.498 "data_offset": 256, 00:17:19.498 "data_size": 7936 00:17:19.498 } 00:17:19.498 ] 00:17:19.498 }' 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.498 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.498 [2024-11-26 21:24:37.562650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.498 [2024-11-26 21:24:37.562816] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.499 [2024-11-26 21:24:37.562832] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:19.499 request: 00:17:19.499 { 00:17:19.499 "base_bdev": "BaseBdev1", 00:17:19.499 "raid_bdev": "raid_bdev1", 00:17:19.499 "method": "bdev_raid_add_base_bdev", 00:17:19.499 "req_id": 1 00:17:19.499 } 00:17:19.499 Got JSON-RPC error response 00:17:19.499 response: 00:17:19.499 { 00:17:19.499 "code": -22, 00:17:19.499 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:19.499 } 00:17:19.499 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:19.499 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:19.499 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.499 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.499 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.499 21:24:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:20.437 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.700 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.700 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.700 "name": "raid_bdev1", 00:17:20.700 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:20.700 "strip_size_kb": 0, 00:17:20.700 "state": "online", 00:17:20.700 "raid_level": "raid1", 00:17:20.700 "superblock": true, 00:17:20.700 "num_base_bdevs": 2, 00:17:20.700 "num_base_bdevs_discovered": 1, 00:17:20.700 "num_base_bdevs_operational": 1, 00:17:20.700 "base_bdevs_list": [ 00:17:20.700 { 00:17:20.700 "name": null, 00:17:20.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.700 "is_configured": false, 00:17:20.700 "data_offset": 0, 00:17:20.700 "data_size": 7936 00:17:20.700 }, 00:17:20.700 { 00:17:20.700 "name": "BaseBdev2", 00:17:20.700 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:20.700 "is_configured": true, 00:17:20.700 "data_offset": 256, 00:17:20.700 "data_size": 7936 00:17:20.700 } 00:17:20.700 ] 00:17:20.700 }' 00:17:20.700 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.700 21:24:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.967 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.967 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.967 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.967 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.967 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.967 21:24:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.967 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.967 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.968 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.968 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.968 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.968 "name": "raid_bdev1", 00:17:20.968 "uuid": "7606d28c-71f2-4d9b-8933-3bbab345ff5e", 00:17:20.968 "strip_size_kb": 0, 00:17:20.968 "state": "online", 00:17:20.968 "raid_level": "raid1", 00:17:20.968 "superblock": true, 00:17:20.968 "num_base_bdevs": 2, 00:17:20.968 "num_base_bdevs_discovered": 1, 00:17:20.968 "num_base_bdevs_operational": 1, 00:17:20.968 "base_bdevs_list": [ 00:17:20.968 { 00:17:20.968 "name": null, 00:17:20.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.968 "is_configured": false, 00:17:20.968 "data_offset": 0, 00:17:20.968 "data_size": 7936 00:17:20.968 }, 00:17:20.968 { 00:17:20.968 "name": "BaseBdev2", 00:17:20.968 "uuid": "7458b91c-f002-57bd-88a4-30fff29bfe94", 00:17:20.968 "is_configured": true, 00:17:20.968 "data_offset": 256, 00:17:20.968 "data_size": 7936 00:17:20.968 } 00:17:20.968 ] 00:17:20.968 }' 00:17:20.968 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.968 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.968 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.250 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.250 21:24:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86302 00:17:21.250 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86302 ']' 00:17:21.250 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86302 00:17:21.250 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:21.250 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.250 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86302 00:17:21.250 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.250 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.250 killing process with pid 86302 00:17:21.250 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86302' 00:17:21.250 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86302 00:17:21.250 Received shutdown signal, test time was about 60.000000 seconds 00:17:21.250 00:17:21.250 Latency(us) 00:17:21.250 [2024-11-26T21:24:39.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.250 [2024-11-26T21:24:39.406Z] =================================================================================================================== 00:17:21.250 [2024-11-26T21:24:39.406Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.250 [2024-11-26 21:24:39.166054] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.250 [2024-11-26 21:24:39.166179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.250 [2024-11-26 21:24:39.166230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:17:21.250 [2024-11-26 21:24:39.166243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:21.250 21:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86302 00:17:21.521 [2024-11-26 21:24:39.479981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.902 21:24:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:22.902 00:17:22.902 real 0m19.723s 00:17:22.902 user 0m25.490s 00:17:22.902 sys 0m2.636s 00:17:22.902 21:24:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.902 21:24:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.902 ************************************ 00:17:22.902 END TEST raid_rebuild_test_sb_4k 00:17:22.903 ************************************ 00:17:22.903 21:24:40 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:22.903 21:24:40 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:22.903 21:24:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:22.903 21:24:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.903 21:24:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.903 ************************************ 00:17:22.903 START TEST raid_state_function_test_sb_md_separate 00:17:22.903 ************************************ 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:22.903 21:24:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:22.903 21:24:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86988 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86988' 00:17:22.903 Process raid pid: 86988 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86988 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86988 ']' 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.903 21:24:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.903 [2024-11-26 21:24:40.814228] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:22.903 [2024-11-26 21:24:40.814339] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.903 [2024-11-26 21:24:40.991051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.163 [2024-11-26 21:24:41.121679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.423 [2024-11-26 21:24:41.349501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.423 [2024-11-26 21:24:41.349538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.683 [2024-11-26 21:24:41.632810] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.683 [2024-11-26 21:24:41.632863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:23.683 [2024-11-26 21:24:41.632874] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.683 [2024-11-26 21:24:41.632884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.683 21:24:41 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.683 "name": "Existed_Raid", 00:17:23.683 "uuid": "d41c9610-a211-4dcd-9513-6f032ea0437b", 00:17:23.683 "strip_size_kb": 0, 00:17:23.683 "state": "configuring", 00:17:23.683 "raid_level": "raid1", 00:17:23.683 "superblock": true, 00:17:23.683 "num_base_bdevs": 2, 00:17:23.683 "num_base_bdevs_discovered": 0, 00:17:23.683 "num_base_bdevs_operational": 2, 00:17:23.683 "base_bdevs_list": [ 00:17:23.683 { 00:17:23.683 "name": "BaseBdev1", 00:17:23.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.683 "is_configured": false, 00:17:23.683 "data_offset": 0, 00:17:23.683 "data_size": 0 00:17:23.683 }, 00:17:23.683 { 00:17:23.683 "name": "BaseBdev2", 00:17:23.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.683 "is_configured": false, 00:17:23.683 "data_offset": 0, 00:17:23.683 "data_size": 0 00:17:23.683 } 00:17:23.683 ] 00:17:23.683 }' 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.683 21:24:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.943 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:23.943 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.943 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.943 [2024-11-26 
21:24:42.064170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:23.943 [2024-11-26 21:24:42.064223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:23.943 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.943 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:23.943 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.943 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.943 [2024-11-26 21:24:42.076165] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.943 [2024-11-26 21:24:42.076199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.943 [2024-11-26 21:24:42.076208] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.943 [2024-11-26 21:24:42.076221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.943 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.943 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:23.943 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.943 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.203 [2024-11-26 21:24:42.131849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.203 BaseBdev1 
00:17:24.203 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.203 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:24.203 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:24.203 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.203 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:24.203 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.203 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.204 [ 00:17:24.204 { 00:17:24.204 "name": "BaseBdev1", 00:17:24.204 "aliases": [ 00:17:24.204 "b9434677-842d-4917-8485-06ee57ecc545" 00:17:24.204 ], 00:17:24.204 "product_name": "Malloc disk", 00:17:24.204 
"block_size": 4096, 00:17:24.204 "num_blocks": 8192, 00:17:24.204 "uuid": "b9434677-842d-4917-8485-06ee57ecc545", 00:17:24.204 "md_size": 32, 00:17:24.204 "md_interleave": false, 00:17:24.204 "dif_type": 0, 00:17:24.204 "assigned_rate_limits": { 00:17:24.204 "rw_ios_per_sec": 0, 00:17:24.204 "rw_mbytes_per_sec": 0, 00:17:24.204 "r_mbytes_per_sec": 0, 00:17:24.204 "w_mbytes_per_sec": 0 00:17:24.204 }, 00:17:24.204 "claimed": true, 00:17:24.204 "claim_type": "exclusive_write", 00:17:24.204 "zoned": false, 00:17:24.204 "supported_io_types": { 00:17:24.204 "read": true, 00:17:24.204 "write": true, 00:17:24.204 "unmap": true, 00:17:24.204 "flush": true, 00:17:24.204 "reset": true, 00:17:24.204 "nvme_admin": false, 00:17:24.204 "nvme_io": false, 00:17:24.204 "nvme_io_md": false, 00:17:24.204 "write_zeroes": true, 00:17:24.204 "zcopy": true, 00:17:24.204 "get_zone_info": false, 00:17:24.204 "zone_management": false, 00:17:24.204 "zone_append": false, 00:17:24.204 "compare": false, 00:17:24.204 "compare_and_write": false, 00:17:24.204 "abort": true, 00:17:24.204 "seek_hole": false, 00:17:24.204 "seek_data": false, 00:17:24.204 "copy": true, 00:17:24.204 "nvme_iov_md": false 00:17:24.204 }, 00:17:24.204 "memory_domains": [ 00:17:24.204 { 00:17:24.204 "dma_device_id": "system", 00:17:24.204 "dma_device_type": 1 00:17:24.204 }, 00:17:24.204 { 00:17:24.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.204 "dma_device_type": 2 00:17:24.204 } 00:17:24.204 ], 00:17:24.204 "driver_specific": {} 00:17:24.204 } 00:17:24.204 ] 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:24.204 21:24:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.204 "name": "Existed_Raid", 00:17:24.204 "uuid": "a44550d8-8c6d-46e4-9b8a-c2a64f500e3b", 
00:17:24.204 "strip_size_kb": 0, 00:17:24.204 "state": "configuring", 00:17:24.204 "raid_level": "raid1", 00:17:24.204 "superblock": true, 00:17:24.204 "num_base_bdevs": 2, 00:17:24.204 "num_base_bdevs_discovered": 1, 00:17:24.204 "num_base_bdevs_operational": 2, 00:17:24.204 "base_bdevs_list": [ 00:17:24.204 { 00:17:24.204 "name": "BaseBdev1", 00:17:24.204 "uuid": "b9434677-842d-4917-8485-06ee57ecc545", 00:17:24.204 "is_configured": true, 00:17:24.204 "data_offset": 256, 00:17:24.204 "data_size": 7936 00:17:24.204 }, 00:17:24.204 { 00:17:24.204 "name": "BaseBdev2", 00:17:24.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.204 "is_configured": false, 00:17:24.204 "data_offset": 0, 00:17:24.204 "data_size": 0 00:17:24.204 } 00:17:24.204 ] 00:17:24.204 }' 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.204 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.464 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.464 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.464 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.464 [2024-11-26 21:24:42.615055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.464 [2024-11-26 21:24:42.615098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:24.724 21:24:42 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.724 [2024-11-26 21:24:42.627079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.724 [2024-11-26 21:24:42.629111] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.724 [2024-11-26 21:24:42.629148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.724 "name": "Existed_Raid", 00:17:24.724 "uuid": "5d426852-c276-4f42-8a66-86a964c6df1b", 00:17:24.724 "strip_size_kb": 0, 00:17:24.724 "state": "configuring", 00:17:24.724 "raid_level": "raid1", 00:17:24.724 "superblock": true, 00:17:24.724 "num_base_bdevs": 2, 00:17:24.724 "num_base_bdevs_discovered": 1, 00:17:24.724 "num_base_bdevs_operational": 2, 00:17:24.724 "base_bdevs_list": [ 00:17:24.724 { 00:17:24.724 "name": "BaseBdev1", 00:17:24.724 "uuid": "b9434677-842d-4917-8485-06ee57ecc545", 00:17:24.724 "is_configured": true, 00:17:24.724 "data_offset": 256, 00:17:24.724 "data_size": 7936 00:17:24.724 }, 00:17:24.724 { 00:17:24.724 "name": "BaseBdev2", 00:17:24.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.724 "is_configured": false, 00:17:24.724 "data_offset": 0, 00:17:24.724 "data_size": 0 00:17:24.724 } 00:17:24.724 ] 00:17:24.724 }' 00:17:24.724 21:24:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.724 21:24:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.984 [2024-11-26 21:24:43.114870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.984 [2024-11-26 21:24:43.115135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:24.984 [2024-11-26 21:24:43.115154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:24.984 [2024-11-26 21:24:43.115247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:24.984 [2024-11-26 21:24:43.115408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:24.984 [2024-11-26 21:24:43.115426] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:24.984 [2024-11-26 21:24:43.115529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.984 BaseBdev2 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.984 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.245 [ 00:17:25.245 { 00:17:25.245 "name": "BaseBdev2", 00:17:25.245 "aliases": [ 00:17:25.245 "318399c4-7148-4b8f-97a7-d8dd41fbb318" 00:17:25.245 ], 00:17:25.245 "product_name": "Malloc disk", 00:17:25.245 "block_size": 4096, 00:17:25.245 "num_blocks": 8192, 00:17:25.245 "uuid": "318399c4-7148-4b8f-97a7-d8dd41fbb318", 00:17:25.245 "md_size": 32, 00:17:25.245 "md_interleave": false, 00:17:25.245 "dif_type": 0, 00:17:25.245 "assigned_rate_limits": { 00:17:25.245 "rw_ios_per_sec": 0, 00:17:25.245 "rw_mbytes_per_sec": 0, 00:17:25.245 "r_mbytes_per_sec": 0, 00:17:25.245 "w_mbytes_per_sec": 0 00:17:25.245 }, 00:17:25.245 "claimed": true, 00:17:25.245 "claim_type": 
"exclusive_write", 00:17:25.245 "zoned": false, 00:17:25.245 "supported_io_types": { 00:17:25.245 "read": true, 00:17:25.245 "write": true, 00:17:25.245 "unmap": true, 00:17:25.245 "flush": true, 00:17:25.245 "reset": true, 00:17:25.245 "nvme_admin": false, 00:17:25.245 "nvme_io": false, 00:17:25.245 "nvme_io_md": false, 00:17:25.245 "write_zeroes": true, 00:17:25.245 "zcopy": true, 00:17:25.245 "get_zone_info": false, 00:17:25.245 "zone_management": false, 00:17:25.245 "zone_append": false, 00:17:25.245 "compare": false, 00:17:25.245 "compare_and_write": false, 00:17:25.245 "abort": true, 00:17:25.245 "seek_hole": false, 00:17:25.245 "seek_data": false, 00:17:25.245 "copy": true, 00:17:25.245 "nvme_iov_md": false 00:17:25.245 }, 00:17:25.245 "memory_domains": [ 00:17:25.245 { 00:17:25.245 "dma_device_id": "system", 00:17:25.245 "dma_device_type": 1 00:17:25.245 }, 00:17:25.245 { 00:17:25.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.245 "dma_device_type": 2 00:17:25.245 } 00:17:25.245 ], 00:17:25.245 "driver_specific": {} 00:17:25.245 } 00:17:25.245 ] 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.245 
21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.245 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.245 "name": "Existed_Raid", 00:17:25.245 "uuid": "5d426852-c276-4f42-8a66-86a964c6df1b", 00:17:25.245 "strip_size_kb": 0, 00:17:25.245 "state": "online", 00:17:25.245 "raid_level": "raid1", 00:17:25.245 "superblock": true, 00:17:25.245 "num_base_bdevs": 2, 00:17:25.245 "num_base_bdevs_discovered": 2, 00:17:25.245 "num_base_bdevs_operational": 2, 00:17:25.245 
"base_bdevs_list": [ 00:17:25.245 { 00:17:25.245 "name": "BaseBdev1", 00:17:25.245 "uuid": "b9434677-842d-4917-8485-06ee57ecc545", 00:17:25.245 "is_configured": true, 00:17:25.245 "data_offset": 256, 00:17:25.245 "data_size": 7936 00:17:25.245 }, 00:17:25.245 { 00:17:25.245 "name": "BaseBdev2", 00:17:25.245 "uuid": "318399c4-7148-4b8f-97a7-d8dd41fbb318", 00:17:25.245 "is_configured": true, 00:17:25.245 "data_offset": 256, 00:17:25.245 "data_size": 7936 00:17:25.246 } 00:17:25.246 ] 00:17:25.246 }' 00:17:25.246 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.246 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:17:25.506 [2024-11-26 21:24:43.606321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.506 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:25.506 "name": "Existed_Raid", 00:17:25.506 "aliases": [ 00:17:25.506 "5d426852-c276-4f42-8a66-86a964c6df1b" 00:17:25.506 ], 00:17:25.506 "product_name": "Raid Volume", 00:17:25.506 "block_size": 4096, 00:17:25.506 "num_blocks": 7936, 00:17:25.506 "uuid": "5d426852-c276-4f42-8a66-86a964c6df1b", 00:17:25.506 "md_size": 32, 00:17:25.506 "md_interleave": false, 00:17:25.506 "dif_type": 0, 00:17:25.506 "assigned_rate_limits": { 00:17:25.506 "rw_ios_per_sec": 0, 00:17:25.506 "rw_mbytes_per_sec": 0, 00:17:25.506 "r_mbytes_per_sec": 0, 00:17:25.506 "w_mbytes_per_sec": 0 00:17:25.506 }, 00:17:25.506 "claimed": false, 00:17:25.506 "zoned": false, 00:17:25.506 "supported_io_types": { 00:17:25.506 "read": true, 00:17:25.506 "write": true, 00:17:25.506 "unmap": false, 00:17:25.506 "flush": false, 00:17:25.506 "reset": true, 00:17:25.506 "nvme_admin": false, 00:17:25.506 "nvme_io": false, 00:17:25.506 "nvme_io_md": false, 00:17:25.506 "write_zeroes": true, 00:17:25.506 "zcopy": false, 00:17:25.506 "get_zone_info": false, 00:17:25.506 "zone_management": false, 00:17:25.506 "zone_append": false, 00:17:25.506 "compare": false, 00:17:25.506 "compare_and_write": false, 00:17:25.506 "abort": false, 00:17:25.506 "seek_hole": false, 00:17:25.506 "seek_data": false, 00:17:25.507 "copy": false, 00:17:25.507 "nvme_iov_md": false 00:17:25.507 }, 00:17:25.507 "memory_domains": [ 00:17:25.507 { 00:17:25.507 "dma_device_id": "system", 00:17:25.507 "dma_device_type": 1 00:17:25.507 }, 00:17:25.507 { 00:17:25.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.507 "dma_device_type": 2 00:17:25.507 }, 00:17:25.507 { 
00:17:25.507 "dma_device_id": "system", 00:17:25.507 "dma_device_type": 1 00:17:25.507 }, 00:17:25.507 { 00:17:25.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.507 "dma_device_type": 2 00:17:25.507 } 00:17:25.507 ], 00:17:25.507 "driver_specific": { 00:17:25.507 "raid": { 00:17:25.507 "uuid": "5d426852-c276-4f42-8a66-86a964c6df1b", 00:17:25.507 "strip_size_kb": 0, 00:17:25.507 "state": "online", 00:17:25.507 "raid_level": "raid1", 00:17:25.507 "superblock": true, 00:17:25.507 "num_base_bdevs": 2, 00:17:25.507 "num_base_bdevs_discovered": 2, 00:17:25.507 "num_base_bdevs_operational": 2, 00:17:25.507 "base_bdevs_list": [ 00:17:25.507 { 00:17:25.507 "name": "BaseBdev1", 00:17:25.507 "uuid": "b9434677-842d-4917-8485-06ee57ecc545", 00:17:25.507 "is_configured": true, 00:17:25.507 "data_offset": 256, 00:17:25.507 "data_size": 7936 00:17:25.507 }, 00:17:25.507 { 00:17:25.507 "name": "BaseBdev2", 00:17:25.507 "uuid": "318399c4-7148-4b8f-97a7-d8dd41fbb318", 00:17:25.507 "is_configured": true, 00:17:25.507 "data_offset": 256, 00:17:25.507 "data_size": 7936 00:17:25.507 } 00:17:25.507 ] 00:17:25.507 } 00:17:25.507 } 00:17:25.507 }' 00:17:25.507 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:25.767 BaseBdev2' 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.767 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.767 [2024-11-26 21:24:43.833714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.027 "name": "Existed_Raid", 00:17:26.027 "uuid": "5d426852-c276-4f42-8a66-86a964c6df1b", 00:17:26.027 "strip_size_kb": 0, 00:17:26.027 "state": "online", 00:17:26.027 "raid_level": "raid1", 00:17:26.027 "superblock": true, 00:17:26.027 "num_base_bdevs": 2, 00:17:26.027 "num_base_bdevs_discovered": 1, 00:17:26.027 "num_base_bdevs_operational": 1, 00:17:26.027 "base_bdevs_list": [ 00:17:26.027 { 00:17:26.027 "name": null, 00:17:26.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.027 "is_configured": false, 00:17:26.027 "data_offset": 0, 00:17:26.027 "data_size": 7936 00:17:26.027 }, 00:17:26.027 { 00:17:26.027 "name": "BaseBdev2", 00:17:26.027 "uuid": 
"318399c4-7148-4b8f-97a7-d8dd41fbb318", 00:17:26.027 "is_configured": true, 00:17:26.027 "data_offset": 256, 00:17:26.027 "data_size": 7936 00:17:26.027 } 00:17:26.027 ] 00:17:26.027 }' 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.027 21:24:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.287 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:26.287 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:26.287 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.287 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:26.287 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.287 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.287 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.547 [2024-11-26 21:24:44.471835] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:26.547 [2024-11-26 21:24:44.471970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.547 [2024-11-26 21:24:44.577388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.547 [2024-11-26 21:24:44.577456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.547 [2024-11-26 21:24:44.577470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.547 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:26.548 21:24:44 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86988 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86988 ']' 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86988 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86988 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.548 killing process with pid 86988 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86988' 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86988 00:17:26.548 [2024-11-26 21:24:44.671384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:26.548 21:24:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86988 00:17:26.548 [2024-11-26 21:24:44.689424] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.930 21:24:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:27.930 00:17:27.930 real 0m5.154s 00:17:27.930 user 0m7.237s 00:17:27.930 sys 0m0.973s 00:17:27.930 21:24:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.930 
21:24:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.930 ************************************ 00:17:27.930 END TEST raid_state_function_test_sb_md_separate 00:17:27.930 ************************************ 00:17:27.930 21:24:45 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:27.930 21:24:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:27.930 21:24:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.930 21:24:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.930 ************************************ 00:17:27.930 START TEST raid_superblock_test_md_separate 00:17:27.930 ************************************ 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87240 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87240 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87240 ']' 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.930 21:24:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.931 21:24:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:27.931 21:24:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.931 21:24:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.931 [2024-11-26 21:24:46.049589] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:27.931 [2024-11-26 21:24:46.049696] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87240 ] 00:17:28.191 [2024-11-26 21:24:46.225035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.452 [2024-11-26 21:24:46.367346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.452 [2024-11-26 21:24:46.601329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.452 [2024-11-26 21:24:46.601384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:28.713 21:24:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.713 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.973 malloc1 00:17:28.973 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.973 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:28.973 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.973 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.973 [2024-11-26 21:24:46.917255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:28.973 [2024-11-26 21:24:46.917308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.973 [2024-11-26 21:24:46.917330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:28.973 [2024-11-26 21:24:46.917340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.973 [2024-11-26 21:24:46.919405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.973 [2024-11-26 21:24:46.919437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:28.973 pt1 00:17:28.973 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.973 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:28.973 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:28.973 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:28.973 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:28.973 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:28.973 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.974 malloc2 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.974 21:24:46 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.974 [2024-11-26 21:24:46.978666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:28.974 [2024-11-26 21:24:46.978714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.974 [2024-11-26 21:24:46.978737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:28.974 [2024-11-26 21:24:46.978746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.974 [2024-11-26 21:24:46.980896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.974 [2024-11-26 21:24:46.980926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:28.974 pt2 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.974 [2024-11-26 21:24:46.990674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:28.974 [2024-11-26 21:24:46.992743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:28.974 [2024-11-26 21:24:46.992922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:28.974 [2024-11-26 21:24:46.992937] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:28.974 [2024-11-26 21:24:46.993022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:28.974 [2024-11-26 21:24:46.993147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:28.974 [2024-11-26 21:24:46.993164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:28.974 [2024-11-26 21:24:46.993267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.974 21:24:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.974 21:24:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.974 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.974 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.974 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.974 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.974 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.974 "name": "raid_bdev1", 00:17:28.974 "uuid": "b2c4616b-4d3c-4583-8850-0d82603d5858", 00:17:28.974 "strip_size_kb": 0, 00:17:28.974 "state": "online", 00:17:28.974 "raid_level": "raid1", 00:17:28.974 "superblock": true, 00:17:28.974 "num_base_bdevs": 2, 00:17:28.974 "num_base_bdevs_discovered": 2, 00:17:28.974 "num_base_bdevs_operational": 2, 00:17:28.974 "base_bdevs_list": [ 00:17:28.974 { 00:17:28.974 "name": "pt1", 00:17:28.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:28.974 "is_configured": true, 00:17:28.974 "data_offset": 256, 00:17:28.974 "data_size": 7936 00:17:28.974 }, 00:17:28.974 { 00:17:28.974 "name": "pt2", 00:17:28.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:28.974 "is_configured": true, 00:17:28.974 "data_offset": 256, 00:17:28.974 "data_size": 7936 00:17:28.974 } 00:17:28.974 ] 00:17:28.974 }' 00:17:28.974 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.974 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.234 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:29.234 21:24:47 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:29.234 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:29.234 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:29.495 [2024-11-26 21:24:47.398214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:29.495 "name": "raid_bdev1", 00:17:29.495 "aliases": [ 00:17:29.495 "b2c4616b-4d3c-4583-8850-0d82603d5858" 00:17:29.495 ], 00:17:29.495 "product_name": "Raid Volume", 00:17:29.495 "block_size": 4096, 00:17:29.495 "num_blocks": 7936, 00:17:29.495 "uuid": "b2c4616b-4d3c-4583-8850-0d82603d5858", 00:17:29.495 "md_size": 32, 00:17:29.495 "md_interleave": false, 00:17:29.495 "dif_type": 0, 00:17:29.495 "assigned_rate_limits": { 00:17:29.495 "rw_ios_per_sec": 0, 00:17:29.495 "rw_mbytes_per_sec": 0, 00:17:29.495 "r_mbytes_per_sec": 0, 00:17:29.495 "w_mbytes_per_sec": 0 00:17:29.495 }, 00:17:29.495 "claimed": false, 00:17:29.495 "zoned": false, 
00:17:29.495 "supported_io_types": { 00:17:29.495 "read": true, 00:17:29.495 "write": true, 00:17:29.495 "unmap": false, 00:17:29.495 "flush": false, 00:17:29.495 "reset": true, 00:17:29.495 "nvme_admin": false, 00:17:29.495 "nvme_io": false, 00:17:29.495 "nvme_io_md": false, 00:17:29.495 "write_zeroes": true, 00:17:29.495 "zcopy": false, 00:17:29.495 "get_zone_info": false, 00:17:29.495 "zone_management": false, 00:17:29.495 "zone_append": false, 00:17:29.495 "compare": false, 00:17:29.495 "compare_and_write": false, 00:17:29.495 "abort": false, 00:17:29.495 "seek_hole": false, 00:17:29.495 "seek_data": false, 00:17:29.495 "copy": false, 00:17:29.495 "nvme_iov_md": false 00:17:29.495 }, 00:17:29.495 "memory_domains": [ 00:17:29.495 { 00:17:29.495 "dma_device_id": "system", 00:17:29.495 "dma_device_type": 1 00:17:29.495 }, 00:17:29.495 { 00:17:29.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.495 "dma_device_type": 2 00:17:29.495 }, 00:17:29.495 { 00:17:29.495 "dma_device_id": "system", 00:17:29.495 "dma_device_type": 1 00:17:29.495 }, 00:17:29.495 { 00:17:29.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.495 "dma_device_type": 2 00:17:29.495 } 00:17:29.495 ], 00:17:29.495 "driver_specific": { 00:17:29.495 "raid": { 00:17:29.495 "uuid": "b2c4616b-4d3c-4583-8850-0d82603d5858", 00:17:29.495 "strip_size_kb": 0, 00:17:29.495 "state": "online", 00:17:29.495 "raid_level": "raid1", 00:17:29.495 "superblock": true, 00:17:29.495 "num_base_bdevs": 2, 00:17:29.495 "num_base_bdevs_discovered": 2, 00:17:29.495 "num_base_bdevs_operational": 2, 00:17:29.495 "base_bdevs_list": [ 00:17:29.495 { 00:17:29.495 "name": "pt1", 00:17:29.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.495 "is_configured": true, 00:17:29.495 "data_offset": 256, 00:17:29.495 "data_size": 7936 00:17:29.495 }, 00:17:29.495 { 00:17:29.495 "name": "pt2", 00:17:29.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.495 "is_configured": true, 00:17:29.495 "data_offset": 256, 
00:17:29.495 "data_size": 7936 00:17:29.495 } 00:17:29.495 ] 00:17:29.495 } 00:17:29.495 } 00:17:29.495 }' 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:29.495 pt2' 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:29.495 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:29.496 [2024-11-26 21:24:47.617787] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.496 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.756 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b2c4616b-4d3c-4583-8850-0d82603d5858 00:17:29.756 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z b2c4616b-4d3c-4583-8850-0d82603d5858 ']' 00:17:29.756 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:29.756 21:24:47 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.756 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.756 [2024-11-26 21:24:47.665466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.756 [2024-11-26 21:24:47.665489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.756 [2024-11-26 21:24:47.665572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.756 [2024-11-26 21:24:47.665623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.756 [2024-11-26 21:24:47.665635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:29.756 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:29.757 21:24:47 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.757 [2024-11-26 21:24:47.801233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:29.757 [2024-11-26 21:24:47.803270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:29.757 [2024-11-26 21:24:47.803346] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:29.757 [2024-11-26 21:24:47.803384] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:29.757 [2024-11-26 21:24:47.803398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.757 [2024-11-26 21:24:47.803408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:29.757 request: 00:17:29.757 { 00:17:29.757 "name": 
"raid_bdev1", 00:17:29.757 "raid_level": "raid1", 00:17:29.757 "base_bdevs": [ 00:17:29.757 "malloc1", 00:17:29.757 "malloc2" 00:17:29.757 ], 00:17:29.757 "superblock": false, 00:17:29.757 "method": "bdev_raid_create", 00:17:29.757 "req_id": 1 00:17:29.757 } 00:17:29.757 Got JSON-RPC error response 00:17:29.757 response: 00:17:29.757 { 00:17:29.757 "code": -17, 00:17:29.757 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:29.757 } 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.757 [2024-11-26 21:24:47.857124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:29.757 [2024-11-26 21:24:47.857165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.757 [2024-11-26 21:24:47.857178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:29.757 [2024-11-26 21:24:47.857189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.757 [2024-11-26 21:24:47.859296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.757 [2024-11-26 21:24:47.859326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:29.757 [2024-11-26 21:24:47.859365] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:29.757 [2024-11-26 21:24:47.859418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:29.757 pt1 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.757 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.758 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.758 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.758 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.758 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.758 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.758 "name": "raid_bdev1", 00:17:29.758 "uuid": "b2c4616b-4d3c-4583-8850-0d82603d5858", 00:17:29.758 "strip_size_kb": 0, 00:17:29.758 "state": "configuring", 00:17:29.758 "raid_level": "raid1", 00:17:29.758 "superblock": true, 00:17:29.758 "num_base_bdevs": 2, 00:17:29.758 "num_base_bdevs_discovered": 1, 00:17:29.758 "num_base_bdevs_operational": 2, 00:17:29.758 "base_bdevs_list": [ 00:17:29.758 { 00:17:29.758 "name": "pt1", 00:17:29.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.758 "is_configured": true, 00:17:29.758 "data_offset": 256, 00:17:29.758 "data_size": 7936 00:17:29.758 }, 00:17:29.758 { 00:17:29.758 "name": null, 00:17:29.758 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.758 "is_configured": false, 00:17:29.758 "data_offset": 256, 00:17:29.758 "data_size": 7936 00:17:29.758 } 00:17:29.758 ] 00:17:29.758 }' 00:17:29.758 21:24:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.758 21:24:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.328 [2024-11-26 21:24:48.228546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.328 [2024-11-26 21:24:48.228600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.328 [2024-11-26 21:24:48.228618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:30.328 [2024-11-26 21:24:48.228628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.328 [2024-11-26 21:24:48.228807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.328 [2024-11-26 21:24:48.228823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.328 [2024-11-26 21:24:48.228863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:30.328 [2024-11-26 21:24:48.228883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:30.328 [2024-11-26 21:24:48.228991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:30.328 [2024-11-26 21:24:48.229004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:30.328 [2024-11-26 21:24:48.229076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:30.328 [2024-11-26 21:24:48.229188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:30.328 [2024-11-26 21:24:48.229196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:30.328 [2024-11-26 21:24:48.229283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.328 pt2 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.328 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.328 "name": "raid_bdev1", 00:17:30.328 "uuid": "b2c4616b-4d3c-4583-8850-0d82603d5858", 00:17:30.328 "strip_size_kb": 0, 00:17:30.328 "state": "online", 00:17:30.328 "raid_level": "raid1", 00:17:30.328 "superblock": true, 00:17:30.328 "num_base_bdevs": 2, 00:17:30.328 "num_base_bdevs_discovered": 2, 00:17:30.328 "num_base_bdevs_operational": 2, 00:17:30.328 "base_bdevs_list": [ 00:17:30.328 { 00:17:30.328 "name": "pt1", 00:17:30.328 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.329 "is_configured": true, 00:17:30.329 "data_offset": 256, 00:17:30.329 "data_size": 7936 00:17:30.329 }, 00:17:30.329 { 00:17:30.329 "name": "pt2", 00:17:30.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.329 "is_configured": true, 00:17:30.329 "data_offset": 256, 
00:17:30.329 "data_size": 7936 00:17:30.329 } 00:17:30.329 ] 00:17:30.329 }' 00:17:30.329 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.329 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.589 [2024-11-26 21:24:48.600232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.589 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:30.589 "name": "raid_bdev1", 00:17:30.589 "aliases": [ 00:17:30.589 "b2c4616b-4d3c-4583-8850-0d82603d5858" 00:17:30.589 ], 00:17:30.589 "product_name": 
"Raid Volume", 00:17:30.589 "block_size": 4096, 00:17:30.589 "num_blocks": 7936, 00:17:30.589 "uuid": "b2c4616b-4d3c-4583-8850-0d82603d5858", 00:17:30.589 "md_size": 32, 00:17:30.589 "md_interleave": false, 00:17:30.589 "dif_type": 0, 00:17:30.589 "assigned_rate_limits": { 00:17:30.589 "rw_ios_per_sec": 0, 00:17:30.589 "rw_mbytes_per_sec": 0, 00:17:30.589 "r_mbytes_per_sec": 0, 00:17:30.589 "w_mbytes_per_sec": 0 00:17:30.589 }, 00:17:30.589 "claimed": false, 00:17:30.589 "zoned": false, 00:17:30.589 "supported_io_types": { 00:17:30.589 "read": true, 00:17:30.589 "write": true, 00:17:30.589 "unmap": false, 00:17:30.589 "flush": false, 00:17:30.589 "reset": true, 00:17:30.589 "nvme_admin": false, 00:17:30.589 "nvme_io": false, 00:17:30.589 "nvme_io_md": false, 00:17:30.589 "write_zeroes": true, 00:17:30.589 "zcopy": false, 00:17:30.589 "get_zone_info": false, 00:17:30.589 "zone_management": false, 00:17:30.589 "zone_append": false, 00:17:30.589 "compare": false, 00:17:30.589 "compare_and_write": false, 00:17:30.589 "abort": false, 00:17:30.589 "seek_hole": false, 00:17:30.589 "seek_data": false, 00:17:30.589 "copy": false, 00:17:30.589 "nvme_iov_md": false 00:17:30.589 }, 00:17:30.589 "memory_domains": [ 00:17:30.589 { 00:17:30.589 "dma_device_id": "system", 00:17:30.589 "dma_device_type": 1 00:17:30.589 }, 00:17:30.589 { 00:17:30.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.589 "dma_device_type": 2 00:17:30.589 }, 00:17:30.589 { 00:17:30.589 "dma_device_id": "system", 00:17:30.589 "dma_device_type": 1 00:17:30.589 }, 00:17:30.589 { 00:17:30.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.589 "dma_device_type": 2 00:17:30.589 } 00:17:30.589 ], 00:17:30.589 "driver_specific": { 00:17:30.589 "raid": { 00:17:30.589 "uuid": "b2c4616b-4d3c-4583-8850-0d82603d5858", 00:17:30.589 "strip_size_kb": 0, 00:17:30.589 "state": "online", 00:17:30.589 "raid_level": "raid1", 00:17:30.589 "superblock": true, 00:17:30.589 "num_base_bdevs": 2, 00:17:30.589 
"num_base_bdevs_discovered": 2, 00:17:30.589 "num_base_bdevs_operational": 2, 00:17:30.589 "base_bdevs_list": [ 00:17:30.589 { 00:17:30.589 "name": "pt1", 00:17:30.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.589 "is_configured": true, 00:17:30.589 "data_offset": 256, 00:17:30.589 "data_size": 7936 00:17:30.590 }, 00:17:30.590 { 00:17:30.590 "name": "pt2", 00:17:30.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.590 "is_configured": true, 00:17:30.590 "data_offset": 256, 00:17:30.590 "data_size": 7936 00:17:30.590 } 00:17:30.590 ] 00:17:30.590 } 00:17:30.590 } 00:17:30.590 }' 00:17:30.590 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:30.590 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:30.590 pt2' 00:17:30.590 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.590 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:30.590 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.590 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:30.590 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.590 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.850 
21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.850 [2024-11-26 21:24:48.847787] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' b2c4616b-4d3c-4583-8850-0d82603d5858 '!=' b2c4616b-4d3c-4583-8850-0d82603d5858 ']' 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.850 [2024-11-26 21:24:48.891490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.850 21:24:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.850 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.851 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.851 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.851 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.851 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.851 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.851 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.851 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.851 "name": "raid_bdev1", 00:17:30.851 "uuid": "b2c4616b-4d3c-4583-8850-0d82603d5858", 00:17:30.851 "strip_size_kb": 0, 00:17:30.851 "state": "online", 00:17:30.851 "raid_level": "raid1", 00:17:30.851 "superblock": true, 00:17:30.851 "num_base_bdevs": 2, 00:17:30.851 "num_base_bdevs_discovered": 1, 00:17:30.851 "num_base_bdevs_operational": 1, 00:17:30.851 "base_bdevs_list": [ 00:17:30.851 { 00:17:30.851 "name": null, 00:17:30.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.851 "is_configured": false, 00:17:30.851 "data_offset": 0, 00:17:30.851 "data_size": 7936 00:17:30.851 }, 00:17:30.851 { 00:17:30.851 "name": "pt2", 00:17:30.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.851 "is_configured": true, 00:17:30.851 "data_offset": 256, 00:17:30.851 "data_size": 7936 00:17:30.851 } 00:17:30.851 ] 00:17:30.851 }' 00:17:30.851 21:24:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:30.851 21:24:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.419 [2024-11-26 21:24:49.278891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.419 [2024-11-26 21:24:49.278951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.419 [2024-11-26 21:24:49.279032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.419 [2024-11-26 21:24:49.279082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.419 [2024-11-26 21:24:49.279115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:31.419 21:24:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.419 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.420 [2024-11-26 21:24:49.334805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.420 [2024-11-26 21:24:49.334851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.420 
[2024-11-26 21:24:49.334864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:31.420 [2024-11-26 21:24:49.334875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.420 [2024-11-26 21:24:49.337185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.420 [2024-11-26 21:24:49.337222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.420 [2024-11-26 21:24:49.337266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:31.420 [2024-11-26 21:24:49.337318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.420 [2024-11-26 21:24:49.337405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:31.420 [2024-11-26 21:24:49.337418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:31.420 [2024-11-26 21:24:49.337488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:31.420 [2024-11-26 21:24:49.337623] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:31.420 [2024-11-26 21:24:49.337631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:31.420 [2024-11-26 21:24:49.337708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.420 pt2 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.420 "name": "raid_bdev1", 00:17:31.420 "uuid": "b2c4616b-4d3c-4583-8850-0d82603d5858", 00:17:31.420 "strip_size_kb": 0, 00:17:31.420 "state": "online", 00:17:31.420 "raid_level": "raid1", 00:17:31.420 "superblock": true, 00:17:31.420 "num_base_bdevs": 2, 00:17:31.420 "num_base_bdevs_discovered": 1, 00:17:31.420 "num_base_bdevs_operational": 1, 00:17:31.420 "base_bdevs_list": [ 00:17:31.420 { 00:17:31.420 
"name": null, 00:17:31.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.420 "is_configured": false, 00:17:31.420 "data_offset": 256, 00:17:31.420 "data_size": 7936 00:17:31.420 }, 00:17:31.420 { 00:17:31.420 "name": "pt2", 00:17:31.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.420 "is_configured": true, 00:17:31.420 "data_offset": 256, 00:17:31.420 "data_size": 7936 00:17:31.420 } 00:17:31.420 ] 00:17:31.420 }' 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.420 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.679 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:31.680 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.680 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.680 [2024-11-26 21:24:49.794003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.680 [2024-11-26 21:24:49.794027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.680 [2024-11-26 21:24:49.794078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.680 [2024-11-26 21:24:49.794118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.680 [2024-11-26 21:24:49.794127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:31.680 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.680 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.680 21:24:49 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.680 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.680 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:31.680 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.939 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:31.939 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.940 [2024-11-26 21:24:49.857925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.940 [2024-11-26 21:24:49.857980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.940 [2024-11-26 21:24:49.857996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:31.940 [2024-11-26 21:24:49.858005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.940 [2024-11-26 21:24:49.860194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.940 [2024-11-26 21:24:49.860267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.940 [2024-11-26 21:24:49.860319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:17:31.940 [2024-11-26 21:24:49.860367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:31.940 [2024-11-26 21:24:49.860502] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:31.940 [2024-11-26 21:24:49.860512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.940 [2024-11-26 21:24:49.860528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:31.940 [2024-11-26 21:24:49.860603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.940 [2024-11-26 21:24:49.860667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:31.940 [2024-11-26 21:24:49.860674] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:31.940 [2024-11-26 21:24:49.860730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:31.940 [2024-11-26 21:24:49.860825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:31.940 [2024-11-26 21:24:49.860835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:31.940 [2024-11-26 21:24:49.860919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.940 pt1 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.940 "name": "raid_bdev1", 00:17:31.940 "uuid": "b2c4616b-4d3c-4583-8850-0d82603d5858", 00:17:31.940 "strip_size_kb": 0, 00:17:31.940 "state": "online", 00:17:31.940 "raid_level": "raid1", 00:17:31.940 "superblock": true, 00:17:31.940 "num_base_bdevs": 2, 00:17:31.940 "num_base_bdevs_discovered": 1, 00:17:31.940 
"num_base_bdevs_operational": 1, 00:17:31.940 "base_bdevs_list": [ 00:17:31.940 { 00:17:31.940 "name": null, 00:17:31.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.940 "is_configured": false, 00:17:31.940 "data_offset": 256, 00:17:31.940 "data_size": 7936 00:17:31.940 }, 00:17:31.940 { 00:17:31.940 "name": "pt2", 00:17:31.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.940 "is_configured": true, 00:17:31.940 "data_offset": 256, 00:17:31.940 "data_size": 7936 00:17:31.940 } 00:17:31.940 ] 00:17:31.940 }' 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.940 21:24:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.199 21:24:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:32.199 21:24:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:32.199 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.199 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.199 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.199 21:24:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:32.199 21:24:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:32.199 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.199 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.199 21:24:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:32.199 [2024-11-26 
21:24:50.333305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.199 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' b2c4616b-4d3c-4583-8850-0d82603d5858 '!=' b2c4616b-4d3c-4583-8850-0d82603d5858 ']' 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87240 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87240 ']' 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87240 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87240 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87240' 00:17:32.459 killing process with pid 87240 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87240 00:17:32.459 [2024-11-26 21:24:50.413968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:32.459 [2024-11-26 21:24:50.414084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.459 [2024-11-26 21:24:50.414150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:32.459 [2024-11-26 21:24:50.414202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:32.459 21:24:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87240 00:17:32.719 [2024-11-26 21:24:50.644649] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.102 21:24:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:34.102 00:17:34.102 real 0m5.873s 00:17:34.102 user 0m8.579s 00:17:34.102 sys 0m1.189s 00:17:34.102 ************************************ 00:17:34.102 END TEST raid_superblock_test_md_separate 00:17:34.102 ************************************ 00:17:34.102 21:24:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.102 21:24:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.102 21:24:51 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:34.103 21:24:51 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:34.103 21:24:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:34.103 21:24:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.103 21:24:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.103 ************************************ 00:17:34.103 START TEST raid_rebuild_test_sb_md_separate 00:17:34.103 ************************************ 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:34.103 
21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87563 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87563 00:17:34.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87563 ']' 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.103 21:24:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.103 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:34.103 Zero copy mechanism will not be used. 00:17:34.103 [2024-11-26 21:24:52.004606] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:34.103 [2024-11-26 21:24:52.004724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87563 ] 00:17:34.103 [2024-11-26 21:24:52.178099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.363 [2024-11-26 21:24:52.311082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.623 [2024-11-26 21:24:52.540168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.623 [2024-11-26 21:24:52.540208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 BaseBdev1_malloc 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:34.883 21:24:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 [2024-11-26 21:24:52.890097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:34.883 [2024-11-26 21:24:52.890157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.883 [2024-11-26 21:24:52.890181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:34.883 [2024-11-26 21:24:52.890193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.883 [2024-11-26 21:24:52.892302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.883 [2024-11-26 21:24:52.892386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:34.883 BaseBdev1 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 BaseBdev2_malloc 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 [2024-11-26 21:24:52.950363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:34.883 [2024-11-26 21:24:52.950419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.883 [2024-11-26 21:24:52.950440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:34.883 [2024-11-26 21:24:52.950453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.883 [2024-11-26 21:24:52.952544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.883 [2024-11-26 21:24:52.952579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:34.883 BaseBdev2 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.883 21:24:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 spare_malloc 00:17:34.883 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.883 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:34.883 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.883 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.142 spare_delay 00:17:35.142 21:24:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.142 [2024-11-26 21:24:53.053053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:35.142 [2024-11-26 21:24:53.053110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.142 [2024-11-26 21:24:53.053131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:35.142 [2024-11-26 21:24:53.053142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.142 [2024-11-26 21:24:53.055270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.142 [2024-11-26 21:24:53.055307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:35.142 spare 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.142 [2024-11-26 21:24:53.065084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.142 [2024-11-26 21:24:53.067078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:17:35.142 [2024-11-26 21:24:53.067255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:35.142 [2024-11-26 21:24:53.067269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:35.142 [2024-11-26 21:24:53.067344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:35.142 [2024-11-26 21:24:53.067466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:35.142 [2024-11-26 21:24:53.067476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:35.142 [2024-11-26 21:24:53.067583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.142 "name": "raid_bdev1", 00:17:35.142 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:35.142 "strip_size_kb": 0, 00:17:35.142 "state": "online", 00:17:35.142 "raid_level": "raid1", 00:17:35.142 "superblock": true, 00:17:35.142 "num_base_bdevs": 2, 00:17:35.142 "num_base_bdevs_discovered": 2, 00:17:35.142 "num_base_bdevs_operational": 2, 00:17:35.142 "base_bdevs_list": [ 00:17:35.142 { 00:17:35.142 "name": "BaseBdev1", 00:17:35.142 "uuid": "de401ca1-0b64-5832-8d12-9452656cdc95", 00:17:35.142 "is_configured": true, 00:17:35.142 "data_offset": 256, 00:17:35.142 "data_size": 7936 00:17:35.142 }, 00:17:35.142 { 00:17:35.142 "name": "BaseBdev2", 00:17:35.142 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:35.142 "is_configured": true, 00:17:35.142 "data_offset": 256, 00:17:35.142 "data_size": 7936 00:17:35.142 } 00:17:35.142 ] 00:17:35.142 }' 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.142 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.403 21:24:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:35.403 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.403 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.403 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.403 [2024-11-26 21:24:53.508557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.403 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.403 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:35.403 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.403 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.403 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.403 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:35.666 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:35.666 [2024-11-26 21:24:53.783924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:35.666 /dev/nbd0 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:35.927 
21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:35.927 1+0 records in 00:17:35.927 1+0 records out 00:17:35.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547589 s, 7.5 MB/s 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:35.927 21:24:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:36.496 7936+0 records in 00:17:36.496 7936+0 records out 00:17:36.496 32505856 bytes (33 MB, 31 MiB) copied, 0.589393 s, 55.2 MB/s 00:17:36.496 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:36.496 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:36.496 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:36.496 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:36.496 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:36.496 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:36.496 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:36.756 [2024-11-26 21:24:54.669153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.756 [2024-11-26 21:24:54.689326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.756 "name": "raid_bdev1", 00:17:36.756 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:36.756 "strip_size_kb": 0, 00:17:36.756 "state": "online", 00:17:36.756 "raid_level": "raid1", 00:17:36.756 "superblock": true, 00:17:36.756 "num_base_bdevs": 2, 00:17:36.756 "num_base_bdevs_discovered": 1, 00:17:36.756 "num_base_bdevs_operational": 1, 00:17:36.756 "base_bdevs_list": [ 00:17:36.756 { 00:17:36.756 "name": null, 00:17:36.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.756 "is_configured": false, 00:17:36.756 "data_offset": 0, 00:17:36.756 "data_size": 7936 00:17:36.756 }, 00:17:36.756 { 00:17:36.756 "name": "BaseBdev2", 00:17:36.756 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:36.756 "is_configured": true, 00:17:36.756 "data_offset": 256, 00:17:36.756 "data_size": 7936 00:17:36.756 } 00:17:36.756 ] 00:17:36.756 }' 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.756 21:24:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.016 21:24:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:37.016 21:24:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:37.016 21:24:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.016 [2024-11-26 21:24:55.136535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.016 [2024-11-26 21:24:55.151122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:37.016 21:24:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.016 21:24:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:37.016 [2024-11-26 21:24:55.153194] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:38.398 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.398 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.398 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.398 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.398 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.398 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.398 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.398 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.398 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.398 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.398 21:24:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.398 "name": "raid_bdev1", 00:17:38.398 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:38.398 "strip_size_kb": 0, 00:17:38.398 "state": "online", 00:17:38.398 "raid_level": "raid1", 00:17:38.398 "superblock": true, 00:17:38.398 "num_base_bdevs": 2, 00:17:38.398 "num_base_bdevs_discovered": 2, 00:17:38.398 "num_base_bdevs_operational": 2, 00:17:38.398 "process": { 00:17:38.398 "type": "rebuild", 00:17:38.398 "target": "spare", 00:17:38.398 "progress": { 00:17:38.398 "blocks": 2560, 00:17:38.399 "percent": 32 00:17:38.399 } 00:17:38.399 }, 00:17:38.399 "base_bdevs_list": [ 00:17:38.399 { 00:17:38.399 "name": "spare", 00:17:38.399 "uuid": "3357e583-6eb2-5c5e-8bf2-27203423ce56", 00:17:38.399 "is_configured": true, 00:17:38.399 "data_offset": 256, 00:17:38.399 "data_size": 7936 00:17:38.399 }, 00:17:38.399 { 00:17:38.399 "name": "BaseBdev2", 00:17:38.399 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:38.399 "is_configured": true, 00:17:38.399 "data_offset": 256, 00:17:38.399 "data_size": 7936 00:17:38.399 } 00:17:38.399 ] 00:17:38.399 }' 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.399 [2024-11-26 21:24:56.313110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.399 [2024-11-26 21:24:56.361889] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:38.399 [2024-11-26 21:24:56.361950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.399 [2024-11-26 21:24:56.361975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.399 [2024-11-26 21:24:56.361989] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.399 "name": "raid_bdev1", 00:17:38.399 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:38.399 "strip_size_kb": 0, 00:17:38.399 "state": "online", 00:17:38.399 "raid_level": "raid1", 00:17:38.399 "superblock": true, 00:17:38.399 "num_base_bdevs": 2, 00:17:38.399 "num_base_bdevs_discovered": 1, 00:17:38.399 "num_base_bdevs_operational": 1, 00:17:38.399 "base_bdevs_list": [ 00:17:38.399 { 00:17:38.399 "name": null, 00:17:38.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.399 "is_configured": false, 00:17:38.399 "data_offset": 0, 00:17:38.399 "data_size": 7936 00:17:38.399 }, 00:17:38.399 { 00:17:38.399 "name": "BaseBdev2", 00:17:38.399 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:38.399 "is_configured": true, 00:17:38.399 "data_offset": 256, 00:17:38.399 "data_size": 7936 00:17:38.399 } 00:17:38.399 ] 00:17:38.399 }' 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.399 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.659 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.659 21:24:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.659 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.659 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.659 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.659 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.659 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.659 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.659 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.919 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.919 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.919 "name": "raid_bdev1", 00:17:38.919 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:38.919 "strip_size_kb": 0, 00:17:38.919 "state": "online", 00:17:38.919 "raid_level": "raid1", 00:17:38.920 "superblock": true, 00:17:38.920 "num_base_bdevs": 2, 00:17:38.920 "num_base_bdevs_discovered": 1, 00:17:38.920 "num_base_bdevs_operational": 1, 00:17:38.920 "base_bdevs_list": [ 00:17:38.920 { 00:17:38.920 "name": null, 00:17:38.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.920 "is_configured": false, 00:17:38.920 "data_offset": 0, 00:17:38.920 "data_size": 7936 00:17:38.920 }, 00:17:38.920 { 00:17:38.920 "name": "BaseBdev2", 00:17:38.920 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:38.920 "is_configured": true, 00:17:38.920 "data_offset": 256, 00:17:38.920 "data_size": 7936 
00:17:38.920 } 00:17:38.920 ] 00:17:38.920 }' 00:17:38.920 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.920 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.920 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.920 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.920 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.920 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.920 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.920 [2024-11-26 21:24:56.929303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.920 [2024-11-26 21:24:56.942768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:38.920 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.920 21:24:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:38.920 [2024-11-26 21:24:56.944934] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.860 21:24:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.860 21:24:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.860 21:24:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.860 21:24:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:17:39.860 21:24:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.860 21:24:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.860 21:24:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.860 21:24:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.860 21:24:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.860 21:24:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.860 21:24:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.860 "name": "raid_bdev1", 00:17:39.860 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:39.860 "strip_size_kb": 0, 00:17:39.860 "state": "online", 00:17:39.860 "raid_level": "raid1", 00:17:39.860 "superblock": true, 00:17:39.860 "num_base_bdevs": 2, 00:17:39.860 "num_base_bdevs_discovered": 2, 00:17:39.860 "num_base_bdevs_operational": 2, 00:17:39.860 "process": { 00:17:39.860 "type": "rebuild", 00:17:39.860 "target": "spare", 00:17:39.860 "progress": { 00:17:39.860 "blocks": 2560, 00:17:39.860 "percent": 32 00:17:39.860 } 00:17:39.860 }, 00:17:39.860 "base_bdevs_list": [ 00:17:39.860 { 00:17:39.860 "name": "spare", 00:17:39.860 "uuid": "3357e583-6eb2-5c5e-8bf2-27203423ce56", 00:17:39.860 "is_configured": true, 00:17:39.860 "data_offset": 256, 00:17:39.860 "data_size": 7936 00:17:39.860 }, 00:17:39.860 { 00:17:39.860 "name": "BaseBdev2", 00:17:39.860 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:39.860 "is_configured": true, 00:17:39.860 "data_offset": 256, 00:17:39.860 "data_size": 7936 00:17:39.860 } 00:17:39.860 ] 00:17:39.860 }' 00:17:39.860 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:40.120 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=700 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.120 
21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.120 "name": "raid_bdev1", 00:17:40.120 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:40.120 "strip_size_kb": 0, 00:17:40.120 "state": "online", 00:17:40.120 "raid_level": "raid1", 00:17:40.120 "superblock": true, 00:17:40.120 "num_base_bdevs": 2, 00:17:40.120 "num_base_bdevs_discovered": 2, 00:17:40.120 "num_base_bdevs_operational": 2, 00:17:40.120 "process": { 00:17:40.120 "type": "rebuild", 00:17:40.120 "target": "spare", 00:17:40.120 "progress": { 00:17:40.120 "blocks": 2816, 00:17:40.120 "percent": 35 00:17:40.120 } 00:17:40.120 }, 00:17:40.120 "base_bdevs_list": [ 00:17:40.120 { 00:17:40.120 "name": "spare", 00:17:40.120 "uuid": "3357e583-6eb2-5c5e-8bf2-27203423ce56", 00:17:40.120 "is_configured": true, 00:17:40.120 "data_offset": 256, 00:17:40.120 "data_size": 7936 00:17:40.120 }, 00:17:40.120 { 00:17:40.120 "name": "BaseBdev2", 00:17:40.120 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:40.120 "is_configured": true, 00:17:40.120 "data_offset": 256, 00:17:40.120 "data_size": 7936 00:17:40.120 } 00:17:40.120 ] 00:17:40.120 }' 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.120 21:24:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.498 "name": "raid_bdev1", 00:17:41.498 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:41.498 "strip_size_kb": 0, 00:17:41.498 
"state": "online", 00:17:41.498 "raid_level": "raid1", 00:17:41.498 "superblock": true, 00:17:41.498 "num_base_bdevs": 2, 00:17:41.498 "num_base_bdevs_discovered": 2, 00:17:41.498 "num_base_bdevs_operational": 2, 00:17:41.498 "process": { 00:17:41.498 "type": "rebuild", 00:17:41.498 "target": "spare", 00:17:41.498 "progress": { 00:17:41.498 "blocks": 5632, 00:17:41.498 "percent": 70 00:17:41.498 } 00:17:41.498 }, 00:17:41.498 "base_bdevs_list": [ 00:17:41.498 { 00:17:41.498 "name": "spare", 00:17:41.498 "uuid": "3357e583-6eb2-5c5e-8bf2-27203423ce56", 00:17:41.498 "is_configured": true, 00:17:41.498 "data_offset": 256, 00:17:41.498 "data_size": 7936 00:17:41.498 }, 00:17:41.498 { 00:17:41.498 "name": "BaseBdev2", 00:17:41.498 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:41.498 "is_configured": true, 00:17:41.498 "data_offset": 256, 00:17:41.498 "data_size": 7936 00:17:41.498 } 00:17:41.498 ] 00:17:41.498 }' 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.498 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.499 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.499 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.499 21:24:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.067 [2024-11-26 21:25:00.068007] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:42.067 [2024-11-26 21:25:00.068188] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:42.067 [2024-11-26 21:25:00.068322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.325 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.325 "name": "raid_bdev1", 00:17:42.325 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:42.325 "strip_size_kb": 0, 00:17:42.325 "state": "online", 00:17:42.325 "raid_level": "raid1", 00:17:42.325 "superblock": true, 00:17:42.325 "num_base_bdevs": 2, 00:17:42.325 "num_base_bdevs_discovered": 2, 00:17:42.325 "num_base_bdevs_operational": 2, 00:17:42.325 "base_bdevs_list": [ 00:17:42.326 { 00:17:42.326 "name": "spare", 00:17:42.326 "uuid": "3357e583-6eb2-5c5e-8bf2-27203423ce56", 00:17:42.326 "is_configured": true, 00:17:42.326 "data_offset": 256, 00:17:42.326 "data_size": 7936 
00:17:42.326 }, 00:17:42.326 { 00:17:42.326 "name": "BaseBdev2", 00:17:42.326 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:42.326 "is_configured": true, 00:17:42.326 "data_offset": 256, 00:17:42.326 "data_size": 7936 00:17:42.326 } 00:17:42.326 ] 00:17:42.326 }' 00:17:42.326 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.585 
21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.585 "name": "raid_bdev1", 00:17:42.585 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:42.585 "strip_size_kb": 0, 00:17:42.585 "state": "online", 00:17:42.585 "raid_level": "raid1", 00:17:42.585 "superblock": true, 00:17:42.585 "num_base_bdevs": 2, 00:17:42.585 "num_base_bdevs_discovered": 2, 00:17:42.585 "num_base_bdevs_operational": 2, 00:17:42.585 "base_bdevs_list": [ 00:17:42.585 { 00:17:42.585 "name": "spare", 00:17:42.585 "uuid": "3357e583-6eb2-5c5e-8bf2-27203423ce56", 00:17:42.585 "is_configured": true, 00:17:42.585 "data_offset": 256, 00:17:42.585 "data_size": 7936 00:17:42.585 }, 00:17:42.585 { 00:17:42.585 "name": "BaseBdev2", 00:17:42.585 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:42.585 "is_configured": true, 00:17:42.585 "data_offset": 256, 00:17:42.585 "data_size": 7936 00:17:42.585 } 00:17:42.585 ] 00:17:42.585 }' 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.585 21:25:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.585 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.586 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.586 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.586 "name": "raid_bdev1", 00:17:42.586 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:42.586 "strip_size_kb": 0, 00:17:42.586 "state": "online", 00:17:42.586 "raid_level": "raid1", 00:17:42.586 "superblock": true, 00:17:42.586 "num_base_bdevs": 2, 00:17:42.586 "num_base_bdevs_discovered": 2, 00:17:42.586 "num_base_bdevs_operational": 2, 00:17:42.586 "base_bdevs_list": [ 00:17:42.586 { 00:17:42.586 "name": "spare", 00:17:42.586 "uuid": 
"3357e583-6eb2-5c5e-8bf2-27203423ce56", 00:17:42.586 "is_configured": true, 00:17:42.586 "data_offset": 256, 00:17:42.586 "data_size": 7936 00:17:42.586 }, 00:17:42.586 { 00:17:42.586 "name": "BaseBdev2", 00:17:42.586 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:42.586 "is_configured": true, 00:17:42.586 "data_offset": 256, 00:17:42.586 "data_size": 7936 00:17:42.586 } 00:17:42.586 ] 00:17:42.586 }' 00:17:42.586 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.586 21:25:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.153 [2024-11-26 21:25:01.084229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.153 [2024-11-26 21:25:01.084345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.153 [2024-11-26 21:25:01.084479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.153 [2024-11-26 21:25:01.084577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.153 [2024-11-26 21:25:01.084628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- 
# jq length 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.153 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:43.411 
/dev/nbd0 00:17:43.411 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:43.411 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:43.411 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.412 1+0 records in 00:17:43.412 1+0 records out 00:17:43.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235013 s, 17.4 MB/s 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.412 21:25:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.412 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:43.670 /dev/nbd1 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:17:43.670 1+0 records in 00:17:43.670 1+0 records out 00:17:43.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039493 s, 10.4 MB/s 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.670 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:43.671 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:43.671 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.671 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.671 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:43.671 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:43.671 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.671 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:43.929 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:43.929 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:43.929 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:43.929 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.929 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.929 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:43.929 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:43.929 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.929 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.929 21:25:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:44.188 
21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.188 [2024-11-26 21:25:02.230155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.188 [2024-11-26 21:25:02.230210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.188 [2024-11-26 21:25:02.230234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:44.188 [2024-11-26 21:25:02.230244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.188 [2024-11-26 21:25:02.232551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.188 [2024-11-26 21:25:02.232586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.188 [2024-11-26 21:25:02.232650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:17:44.188 [2024-11-26 21:25:02.232711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.188 [2024-11-26 21:25:02.232867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.188 spare 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.188 [2024-11-26 21:25:02.332766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:44.188 [2024-11-26 21:25:02.332795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:44.188 [2024-11-26 21:25:02.332894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:44.188 [2024-11-26 21:25:02.333044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:44.188 [2024-11-26 21:25:02.333055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:44.188 [2024-11-26 21:25:02.333171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.188 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.447 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.447 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.447 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.447 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.447 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.447 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.447 "name": "raid_bdev1", 00:17:44.447 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:44.447 "strip_size_kb": 0, 00:17:44.447 "state": "online", 00:17:44.447 "raid_level": "raid1", 00:17:44.447 "superblock": true, 00:17:44.447 "num_base_bdevs": 2, 00:17:44.447 "num_base_bdevs_discovered": 2, 00:17:44.447 "num_base_bdevs_operational": 2, 00:17:44.447 "base_bdevs_list": [ 
00:17:44.447 { 00:17:44.447 "name": "spare", 00:17:44.447 "uuid": "3357e583-6eb2-5c5e-8bf2-27203423ce56", 00:17:44.447 "is_configured": true, 00:17:44.447 "data_offset": 256, 00:17:44.447 "data_size": 7936 00:17:44.447 }, 00:17:44.447 { 00:17:44.447 "name": "BaseBdev2", 00:17:44.447 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:44.447 "is_configured": true, 00:17:44.447 "data_offset": 256, 00:17:44.447 "data_size": 7936 00:17:44.447 } 00:17:44.447 ] 00:17:44.447 }' 00:17:44.447 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.447 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.706 "name": "raid_bdev1", 00:17:44.706 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:44.706 "strip_size_kb": 0, 00:17:44.706 "state": "online", 00:17:44.706 "raid_level": "raid1", 00:17:44.706 "superblock": true, 00:17:44.706 "num_base_bdevs": 2, 00:17:44.706 "num_base_bdevs_discovered": 2, 00:17:44.706 "num_base_bdevs_operational": 2, 00:17:44.706 "base_bdevs_list": [ 00:17:44.706 { 00:17:44.706 "name": "spare", 00:17:44.706 "uuid": "3357e583-6eb2-5c5e-8bf2-27203423ce56", 00:17:44.706 "is_configured": true, 00:17:44.706 "data_offset": 256, 00:17:44.706 "data_size": 7936 00:17:44.706 }, 00:17:44.706 { 00:17:44.706 "name": "BaseBdev2", 00:17:44.706 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:44.706 "is_configured": true, 00:17:44.706 "data_offset": 256, 00:17:44.706 "data_size": 7936 00:17:44.706 } 00:17:44.706 ] 00:17:44.706 }' 00:17:44.706 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.965 [2024-11-26 21:25:02.973012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.965 21:25:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.965 21:25:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.965 21:25:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.965 "name": "raid_bdev1", 00:17:44.965 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:44.965 "strip_size_kb": 0, 00:17:44.965 "state": "online", 00:17:44.965 "raid_level": "raid1", 00:17:44.965 "superblock": true, 00:17:44.965 "num_base_bdevs": 2, 00:17:44.965 "num_base_bdevs_discovered": 1, 00:17:44.965 "num_base_bdevs_operational": 1, 00:17:44.965 "base_bdevs_list": [ 00:17:44.965 { 00:17:44.965 "name": null, 00:17:44.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.965 "is_configured": false, 00:17:44.965 "data_offset": 0, 00:17:44.965 "data_size": 7936 00:17:44.965 }, 00:17:44.965 { 00:17:44.965 "name": "BaseBdev2", 00:17:44.965 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:44.965 "is_configured": true, 00:17:44.965 "data_offset": 256, 00:17:44.965 "data_size": 7936 00:17:44.965 } 00:17:44.965 ] 00:17:44.965 }' 00:17:44.965 21:25:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.965 21:25:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.533 21:25:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:45.534 21:25:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:45.534 21:25:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.534 [2024-11-26 21:25:03.408311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.534 [2024-11-26 21:25:03.408573] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.534 [2024-11-26 21:25:03.408591] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:45.534 [2024-11-26 21:25:03.408646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.534 [2024-11-26 21:25:03.422397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:45.534 21:25:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.534 21:25:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:45.534 [2024-11-26 21:25:03.424581] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:46.468 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.468 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.468 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.468 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.468 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.468 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.468 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.468 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.468 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.468 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.468 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.468 "name": "raid_bdev1", 00:17:46.468 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:46.468 "strip_size_kb": 0, 00:17:46.468 "state": "online", 00:17:46.468 "raid_level": "raid1", 00:17:46.468 "superblock": true, 00:17:46.468 "num_base_bdevs": 2, 00:17:46.468 "num_base_bdevs_discovered": 2, 00:17:46.468 "num_base_bdevs_operational": 2, 00:17:46.468 "process": { 00:17:46.468 "type": "rebuild", 00:17:46.468 "target": "spare", 00:17:46.468 "progress": { 00:17:46.468 "blocks": 2560, 00:17:46.468 "percent": 32 00:17:46.468 } 00:17:46.468 }, 00:17:46.468 "base_bdevs_list": [ 00:17:46.468 { 00:17:46.468 "name": "spare", 00:17:46.468 "uuid": "3357e583-6eb2-5c5e-8bf2-27203423ce56", 00:17:46.468 "is_configured": true, 00:17:46.468 "data_offset": 256, 00:17:46.468 "data_size": 7936 00:17:46.468 }, 00:17:46.469 { 00:17:46.469 "name": "BaseBdev2", 00:17:46.469 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:46.469 "is_configured": true, 00:17:46.469 "data_offset": 256, 00:17:46.469 "data_size": 7936 00:17:46.469 } 00:17:46.469 ] 00:17:46.469 }' 00:17:46.469 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.469 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.469 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.469 
21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.469 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:46.469 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.469 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.469 [2024-11-26 21:25:04.581320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.728 [2024-11-26 21:25:04.634047] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:46.728 [2024-11-26 21:25:04.634114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.728 [2024-11-26 21:25:04.634129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.728 [2024-11-26 21:25:04.634152] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.728 21:25:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.728 "name": "raid_bdev1", 00:17:46.728 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:46.728 "strip_size_kb": 0, 00:17:46.728 "state": "online", 00:17:46.728 "raid_level": "raid1", 00:17:46.728 "superblock": true, 00:17:46.728 "num_base_bdevs": 2, 00:17:46.728 "num_base_bdevs_discovered": 1, 00:17:46.728 "num_base_bdevs_operational": 1, 00:17:46.728 "base_bdevs_list": [ 00:17:46.728 { 00:17:46.728 "name": null, 00:17:46.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.728 "is_configured": false, 00:17:46.728 "data_offset": 0, 00:17:46.728 "data_size": 7936 00:17:46.728 }, 00:17:46.728 { 00:17:46.728 "name": "BaseBdev2", 00:17:46.728 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:46.728 "is_configured": true, 00:17:46.728 "data_offset": 256, 00:17:46.728 "data_size": 7936 00:17:46.728 } 
00:17:46.728 ] 00:17:46.728 }' 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.728 21:25:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.987 21:25:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:46.987 21:25:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.987 21:25:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.245 [2024-11-26 21:25:05.143459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.245 [2024-11-26 21:25:05.143541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.245 [2024-11-26 21:25:05.143572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:47.245 [2024-11-26 21:25:05.143583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.245 [2024-11-26 21:25:05.143911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.245 [2024-11-26 21:25:05.143938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.245 [2024-11-26 21:25:05.144028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:47.245 [2024-11-26 21:25:05.144046] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.245 [2024-11-26 21:25:05.144059] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:47.245 [2024-11-26 21:25:05.144085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.245 [2024-11-26 21:25:05.157947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:47.245 spare 00:17:47.245 21:25:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.245 21:25:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:47.245 [2024-11-26 21:25:05.160081] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:48.181 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.181 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.181 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.181 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.181 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.181 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.181 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.182 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.182 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.182 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.182 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.182 "name": 
"raid_bdev1", 00:17:48.182 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:48.182 "strip_size_kb": 0, 00:17:48.182 "state": "online", 00:17:48.182 "raid_level": "raid1", 00:17:48.182 "superblock": true, 00:17:48.182 "num_base_bdevs": 2, 00:17:48.182 "num_base_bdevs_discovered": 2, 00:17:48.182 "num_base_bdevs_operational": 2, 00:17:48.182 "process": { 00:17:48.182 "type": "rebuild", 00:17:48.182 "target": "spare", 00:17:48.182 "progress": { 00:17:48.182 "blocks": 2560, 00:17:48.182 "percent": 32 00:17:48.182 } 00:17:48.182 }, 00:17:48.182 "base_bdevs_list": [ 00:17:48.182 { 00:17:48.182 "name": "spare", 00:17:48.182 "uuid": "3357e583-6eb2-5c5e-8bf2-27203423ce56", 00:17:48.182 "is_configured": true, 00:17:48.182 "data_offset": 256, 00:17:48.182 "data_size": 7936 00:17:48.182 }, 00:17:48.182 { 00:17:48.182 "name": "BaseBdev2", 00:17:48.182 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:48.182 "is_configured": true, 00:17:48.182 "data_offset": 256, 00:17:48.182 "data_size": 7936 00:17:48.182 } 00:17:48.182 ] 00:17:48.182 }' 00:17:48.182 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.182 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.182 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.182 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.182 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:48.182 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.182 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.182 [2024-11-26 21:25:06.320904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:48.441 [2024-11-26 21:25:06.369680] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:48.441 [2024-11-26 21:25:06.369743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.441 [2024-11-26 21:25:06.369762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.441 [2024-11-26 21:25:06.369769] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.441 "name": "raid_bdev1", 00:17:48.441 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:48.441 "strip_size_kb": 0, 00:17:48.441 "state": "online", 00:17:48.441 "raid_level": "raid1", 00:17:48.441 "superblock": true, 00:17:48.441 "num_base_bdevs": 2, 00:17:48.441 "num_base_bdevs_discovered": 1, 00:17:48.441 "num_base_bdevs_operational": 1, 00:17:48.441 "base_bdevs_list": [ 00:17:48.441 { 00:17:48.441 "name": null, 00:17:48.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.441 "is_configured": false, 00:17:48.441 "data_offset": 0, 00:17:48.441 "data_size": 7936 00:17:48.441 }, 00:17:48.441 { 00:17:48.441 "name": "BaseBdev2", 00:17:48.441 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:48.441 "is_configured": true, 00:17:48.441 "data_offset": 256, 00:17:48.441 "data_size": 7936 00:17:48.441 } 00:17:48.441 ] 00:17:48.441 }' 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.441 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.700 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.700 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.700 21:25:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.700 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.700 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.700 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.700 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.700 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.700 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.700 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.958 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.958 "name": "raid_bdev1", 00:17:48.958 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:48.958 "strip_size_kb": 0, 00:17:48.958 "state": "online", 00:17:48.958 "raid_level": "raid1", 00:17:48.958 "superblock": true, 00:17:48.958 "num_base_bdevs": 2, 00:17:48.958 "num_base_bdevs_discovered": 1, 00:17:48.958 "num_base_bdevs_operational": 1, 00:17:48.958 "base_bdevs_list": [ 00:17:48.959 { 00:17:48.959 "name": null, 00:17:48.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.959 "is_configured": false, 00:17:48.959 "data_offset": 0, 00:17:48.959 "data_size": 7936 00:17:48.959 }, 00:17:48.959 { 00:17:48.959 "name": "BaseBdev2", 00:17:48.959 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:48.959 "is_configured": true, 00:17:48.959 "data_offset": 256, 00:17:48.959 "data_size": 7936 00:17:48.959 } 00:17:48.959 ] 00:17:48.959 }' 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.959 [2024-11-26 21:25:06.958891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:48.959 [2024-11-26 21:25:06.958955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.959 [2024-11-26 21:25:06.958994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:48.959 [2024-11-26 21:25:06.959003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.959 [2024-11-26 21:25:06.959296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.959 [2024-11-26 21:25:06.959314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:17:48.959 [2024-11-26 21:25:06.959374] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:48.959 [2024-11-26 21:25:06.959389] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:48.959 [2024-11-26 21:25:06.959401] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:48.959 [2024-11-26 21:25:06.959413] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:48.959 BaseBdev1 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.959 21:25:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 21:25:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.896 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.896 "name": "raid_bdev1", 00:17:49.896 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:49.896 "strip_size_kb": 0, 00:17:49.896 "state": "online", 00:17:49.896 "raid_level": "raid1", 00:17:49.896 "superblock": true, 00:17:49.896 "num_base_bdevs": 2, 00:17:49.896 "num_base_bdevs_discovered": 1, 00:17:49.896 "num_base_bdevs_operational": 1, 00:17:49.896 "base_bdevs_list": [ 00:17:49.896 { 00:17:49.896 "name": null, 00:17:49.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.896 "is_configured": false, 00:17:49.896 "data_offset": 0, 00:17:49.896 "data_size": 7936 00:17:49.896 }, 00:17:49.896 { 00:17:49.896 "name": "BaseBdev2", 00:17:49.896 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:49.896 "is_configured": true, 00:17:49.896 "data_offset": 256, 00:17:49.896 "data_size": 7936 00:17:49.896 } 00:17:49.896 ] 00:17:49.896 }' 00:17:49.896 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.896 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.484 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.484 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.484 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.484 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.484 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.484 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.484 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.484 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.484 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.484 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.484 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.484 "name": "raid_bdev1", 00:17:50.484 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:50.484 "strip_size_kb": 0, 00:17:50.484 "state": "online", 00:17:50.484 "raid_level": "raid1", 00:17:50.484 "superblock": true, 00:17:50.484 "num_base_bdevs": 2, 00:17:50.484 "num_base_bdevs_discovered": 1, 00:17:50.484 "num_base_bdevs_operational": 1, 00:17:50.484 "base_bdevs_list": [ 00:17:50.484 { 00:17:50.484 "name": null, 00:17:50.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.484 "is_configured": false, 00:17:50.484 "data_offset": 0, 00:17:50.484 "data_size": 7936 00:17:50.484 }, 00:17:50.484 { 00:17:50.484 "name": "BaseBdev2", 00:17:50.484 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:50.485 "is_configured": 
true, 00:17:50.485 "data_offset": 256, 00:17:50.485 "data_size": 7936 00:17:50.485 } 00:17:50.485 ] 00:17:50.485 }' 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.485 [2024-11-26 21:25:08.580249] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.485 [2024-11-26 21:25:08.580464] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.485 [2024-11-26 21:25:08.580480] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:50.485 request: 00:17:50.485 { 00:17:50.485 "base_bdev": "BaseBdev1", 00:17:50.485 "raid_bdev": "raid_bdev1", 00:17:50.485 "method": "bdev_raid_add_base_bdev", 00:17:50.485 "req_id": 1 00:17:50.485 } 00:17:50.485 Got JSON-RPC error response 00:17:50.485 response: 00:17:50.485 { 00:17:50.485 "code": -22, 00:17:50.485 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:50.485 } 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.485 21:25:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.865 "name": "raid_bdev1", 00:17:51.865 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:51.865 "strip_size_kb": 0, 00:17:51.865 "state": "online", 00:17:51.865 "raid_level": "raid1", 00:17:51.865 "superblock": true, 00:17:51.865 "num_base_bdevs": 2, 00:17:51.865 "num_base_bdevs_discovered": 1, 00:17:51.865 "num_base_bdevs_operational": 1, 00:17:51.865 "base_bdevs_list": [ 00:17:51.865 { 00:17:51.865 "name": null, 00:17:51.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.865 "is_configured": false, 00:17:51.865 
"data_offset": 0, 00:17:51.865 "data_size": 7936 00:17:51.865 }, 00:17:51.865 { 00:17:51.865 "name": "BaseBdev2", 00:17:51.865 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:51.865 "is_configured": true, 00:17:51.865 "data_offset": 256, 00:17:51.865 "data_size": 7936 00:17:51.865 } 00:17:51.865 ] 00:17:51.865 }' 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.865 21:25:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.124 "name": "raid_bdev1", 00:17:52.124 "uuid": "42a04454-c33c-4766-8ae9-0433c4d5294d", 00:17:52.124 
"strip_size_kb": 0, 00:17:52.124 "state": "online", 00:17:52.124 "raid_level": "raid1", 00:17:52.124 "superblock": true, 00:17:52.124 "num_base_bdevs": 2, 00:17:52.124 "num_base_bdevs_discovered": 1, 00:17:52.124 "num_base_bdevs_operational": 1, 00:17:52.124 "base_bdevs_list": [ 00:17:52.124 { 00:17:52.124 "name": null, 00:17:52.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.124 "is_configured": false, 00:17:52.124 "data_offset": 0, 00:17:52.124 "data_size": 7936 00:17:52.124 }, 00:17:52.124 { 00:17:52.124 "name": "BaseBdev2", 00:17:52.124 "uuid": "2cb41c3d-6845-5d82-acbd-355d3d3caf28", 00:17:52.124 "is_configured": true, 00:17:52.124 "data_offset": 256, 00:17:52.124 "data_size": 7936 00:17:52.124 } 00:17:52.124 ] 00:17:52.124 }' 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87563 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87563 ']' 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87563 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87563 00:17:52.124 21:25:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.124 killing process with pid 87563 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87563' 00:17:52.124 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87563 00:17:52.124 Received shutdown signal, test time was about 60.000000 seconds 00:17:52.124 00:17:52.124 Latency(us) 00:17:52.124 [2024-11-26T21:25:10.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.124 [2024-11-26T21:25:10.280Z] =================================================================================================================== 00:17:52.124 [2024-11-26T21:25:10.281Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.125 [2024-11-26 21:25:10.231889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.125 [2024-11-26 21:25:10.232064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.125 21:25:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87563 00:17:52.125 [2024-11-26 21:25:10.232126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.125 [2024-11-26 21:25:10.232139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:52.691 [2024-11-26 21:25:10.572889] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:53.629 21:25:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:53.629 00:17:53.629 real 0m19.849s 00:17:53.629 user 0m25.745s 00:17:53.629 sys 0m2.729s 00:17:53.629 21:25:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.629 21:25:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.629 ************************************ 00:17:53.629 END TEST raid_rebuild_test_sb_md_separate 00:17:53.629 ************************************ 00:17:53.889 21:25:11 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:53.889 21:25:11 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:53.889 21:25:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:53.889 21:25:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.889 21:25:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.889 ************************************ 00:17:53.889 START TEST raid_state_function_test_sb_md_interleaved 00:17:53.889 ************************************ 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.889 21:25:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88253 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88253' 00:17:53.889 Process raid pid: 88253 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88253 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88253 ']' 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.889 21:25:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.889 [2024-11-26 21:25:11.921251] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:17:53.889 [2024-11-26 21:25:11.921386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.149 [2024-11-26 21:25:12.096371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.149 [2024-11-26 21:25:12.221461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.407 [2024-11-26 21:25:12.459187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.407 [2024-11-26 21:25:12.459229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.665 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.665 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:54.665 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:54.665 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.665 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.665 [2024-11-26 21:25:12.743983] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.665 [2024-11-26 21:25:12.744038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.665 [2024-11-26 21:25:12.744049] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.665 [2024-11-26 21:25:12.744058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.665 21:25:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.665 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:54.665 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.666 21:25:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.666 "name": "Existed_Raid", 00:17:54.666 "uuid": "da5420a4-67fc-4577-ac7e-0f3840983fa6", 00:17:54.666 "strip_size_kb": 0, 00:17:54.666 "state": "configuring", 00:17:54.666 "raid_level": "raid1", 00:17:54.666 "superblock": true, 00:17:54.666 "num_base_bdevs": 2, 00:17:54.666 "num_base_bdevs_discovered": 0, 00:17:54.666 "num_base_bdevs_operational": 2, 00:17:54.666 "base_bdevs_list": [ 00:17:54.666 { 00:17:54.666 "name": "BaseBdev1", 00:17:54.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.666 "is_configured": false, 00:17:54.666 "data_offset": 0, 00:17:54.666 "data_size": 0 00:17:54.666 }, 00:17:54.666 { 00:17:54.666 "name": "BaseBdev2", 00:17:54.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.666 "is_configured": false, 00:17:54.666 "data_offset": 0, 00:17:54.666 "data_size": 0 00:17:54.666 } 00:17:54.666 ] 00:17:54.666 }' 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.666 21:25:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.233 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:55.233 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.233 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.233 [2024-11-26 21:25:13.103267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.233 [2024-11-26 21:25:13.103304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.234 [2024-11-26 21:25:13.115241] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.234 [2024-11-26 21:25:13.115280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.234 [2024-11-26 21:25:13.115288] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.234 [2024-11-26 21:25:13.115301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.234 [2024-11-26 21:25:13.169009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.234 BaseBdev1 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.234 [ 00:17:55.234 { 00:17:55.234 "name": "BaseBdev1", 00:17:55.234 "aliases": [ 00:17:55.234 "08e28c2b-9af9-40fa-827d-633156303f1b" 00:17:55.234 ], 00:17:55.234 "product_name": "Malloc disk", 00:17:55.234 "block_size": 4128, 00:17:55.234 "num_blocks": 8192, 00:17:55.234 "uuid": "08e28c2b-9af9-40fa-827d-633156303f1b", 00:17:55.234 "md_size": 32, 00:17:55.234 
"md_interleave": true, 00:17:55.234 "dif_type": 0, 00:17:55.234 "assigned_rate_limits": { 00:17:55.234 "rw_ios_per_sec": 0, 00:17:55.234 "rw_mbytes_per_sec": 0, 00:17:55.234 "r_mbytes_per_sec": 0, 00:17:55.234 "w_mbytes_per_sec": 0 00:17:55.234 }, 00:17:55.234 "claimed": true, 00:17:55.234 "claim_type": "exclusive_write", 00:17:55.234 "zoned": false, 00:17:55.234 "supported_io_types": { 00:17:55.234 "read": true, 00:17:55.234 "write": true, 00:17:55.234 "unmap": true, 00:17:55.234 "flush": true, 00:17:55.234 "reset": true, 00:17:55.234 "nvme_admin": false, 00:17:55.234 "nvme_io": false, 00:17:55.234 "nvme_io_md": false, 00:17:55.234 "write_zeroes": true, 00:17:55.234 "zcopy": true, 00:17:55.234 "get_zone_info": false, 00:17:55.234 "zone_management": false, 00:17:55.234 "zone_append": false, 00:17:55.234 "compare": false, 00:17:55.234 "compare_and_write": false, 00:17:55.234 "abort": true, 00:17:55.234 "seek_hole": false, 00:17:55.234 "seek_data": false, 00:17:55.234 "copy": true, 00:17:55.234 "nvme_iov_md": false 00:17:55.234 }, 00:17:55.234 "memory_domains": [ 00:17:55.234 { 00:17:55.234 "dma_device_id": "system", 00:17:55.234 "dma_device_type": 1 00:17:55.234 }, 00:17:55.234 { 00:17:55.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.234 "dma_device_type": 2 00:17:55.234 } 00:17:55.234 ], 00:17:55.234 "driver_specific": {} 00:17:55.234 } 00:17:55.234 ] 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.234 21:25:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.234 "name": "Existed_Raid", 00:17:55.234 "uuid": "0b99fb16-0343-4bc7-ab9f-6a800421a492", 00:17:55.234 "strip_size_kb": 0, 00:17:55.234 "state": "configuring", 00:17:55.234 "raid_level": "raid1", 
00:17:55.234 "superblock": true, 00:17:55.234 "num_base_bdevs": 2, 00:17:55.234 "num_base_bdevs_discovered": 1, 00:17:55.234 "num_base_bdevs_operational": 2, 00:17:55.234 "base_bdevs_list": [ 00:17:55.234 { 00:17:55.234 "name": "BaseBdev1", 00:17:55.234 "uuid": "08e28c2b-9af9-40fa-827d-633156303f1b", 00:17:55.234 "is_configured": true, 00:17:55.234 "data_offset": 256, 00:17:55.234 "data_size": 7936 00:17:55.234 }, 00:17:55.234 { 00:17:55.234 "name": "BaseBdev2", 00:17:55.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.234 "is_configured": false, 00:17:55.234 "data_offset": 0, 00:17:55.234 "data_size": 0 00:17:55.234 } 00:17:55.234 ] 00:17:55.234 }' 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.234 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.494 [2024-11-26 21:25:13.632272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.494 [2024-11-26 21:25:13.632314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.494 [2024-11-26 21:25:13.640312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.494 [2024-11-26 21:25:13.642287] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.494 [2024-11-26 21:25:13.642326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.494 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.753 
21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.753 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.753 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.753 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.753 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.753 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.753 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.753 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.753 "name": "Existed_Raid", 00:17:55.753 "uuid": "e8bd1b85-7ea5-44d4-84ba-c27c4f800017", 00:17:55.753 "strip_size_kb": 0, 00:17:55.753 "state": "configuring", 00:17:55.753 "raid_level": "raid1", 00:17:55.753 "superblock": true, 00:17:55.753 "num_base_bdevs": 2, 00:17:55.753 "num_base_bdevs_discovered": 1, 00:17:55.753 "num_base_bdevs_operational": 2, 00:17:55.753 "base_bdevs_list": [ 00:17:55.753 { 00:17:55.753 "name": "BaseBdev1", 00:17:55.753 "uuid": "08e28c2b-9af9-40fa-827d-633156303f1b", 00:17:55.753 "is_configured": true, 00:17:55.753 "data_offset": 256, 00:17:55.753 "data_size": 7936 00:17:55.753 }, 00:17:55.753 { 00:17:55.753 "name": "BaseBdev2", 00:17:55.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.753 "is_configured": false, 00:17:55.753 "data_offset": 0, 00:17:55.753 "data_size": 0 00:17:55.753 } 00:17:55.753 ] 00:17:55.753 }' 00:17:55.753 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:55.753 21:25:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.012 [2024-11-26 21:25:14.127299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.012 [2024-11-26 21:25:14.127516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:56.012 [2024-11-26 21:25:14.127530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:56.012 [2024-11-26 21:25:14.127619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:56.012 [2024-11-26 21:25:14.127711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:56.012 [2024-11-26 21:25:14.127728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:56.012 [2024-11-26 21:25:14.127789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.012 BaseBdev2 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.012 [ 00:17:56.012 { 00:17:56.012 "name": "BaseBdev2", 00:17:56.012 "aliases": [ 00:17:56.012 "b6155210-dfa9-4014-bc2f-49016786756e" 00:17:56.012 ], 00:17:56.012 "product_name": "Malloc disk", 00:17:56.012 "block_size": 4128, 00:17:56.012 "num_blocks": 8192, 00:17:56.012 "uuid": "b6155210-dfa9-4014-bc2f-49016786756e", 00:17:56.012 "md_size": 32, 00:17:56.012 "md_interleave": true, 00:17:56.012 "dif_type": 0, 00:17:56.012 "assigned_rate_limits": { 00:17:56.012 "rw_ios_per_sec": 0, 00:17:56.012 "rw_mbytes_per_sec": 0, 00:17:56.012 "r_mbytes_per_sec": 0, 00:17:56.012 "w_mbytes_per_sec": 0 00:17:56.012 }, 00:17:56.012 "claimed": true, 00:17:56.012 "claim_type": "exclusive_write", 
00:17:56.012 "zoned": false, 00:17:56.012 "supported_io_types": { 00:17:56.012 "read": true, 00:17:56.012 "write": true, 00:17:56.012 "unmap": true, 00:17:56.012 "flush": true, 00:17:56.012 "reset": true, 00:17:56.012 "nvme_admin": false, 00:17:56.012 "nvme_io": false, 00:17:56.012 "nvme_io_md": false, 00:17:56.012 "write_zeroes": true, 00:17:56.012 "zcopy": true, 00:17:56.012 "get_zone_info": false, 00:17:56.012 "zone_management": false, 00:17:56.012 "zone_append": false, 00:17:56.012 "compare": false, 00:17:56.012 "compare_and_write": false, 00:17:56.012 "abort": true, 00:17:56.012 "seek_hole": false, 00:17:56.012 "seek_data": false, 00:17:56.012 "copy": true, 00:17:56.012 "nvme_iov_md": false 00:17:56.012 }, 00:17:56.012 "memory_domains": [ 00:17:56.012 { 00:17:56.012 "dma_device_id": "system", 00:17:56.012 "dma_device_type": 1 00:17:56.012 }, 00:17:56.012 { 00:17:56.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.012 "dma_device_type": 2 00:17:56.012 } 00:17:56.012 ], 00:17:56.012 "driver_specific": {} 00:17:56.012 } 00:17:56.012 ] 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:56.012 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:56.013 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:56.013 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:56.013 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.013 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.013 
21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.013 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.013 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.271 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.271 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.271 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.271 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.271 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.272 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.272 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.272 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.272 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.272 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.272 "name": "Existed_Raid", 00:17:56.272 "uuid": "e8bd1b85-7ea5-44d4-84ba-c27c4f800017", 00:17:56.272 "strip_size_kb": 0, 00:17:56.272 "state": "online", 00:17:56.272 "raid_level": "raid1", 00:17:56.272 "superblock": true, 00:17:56.272 "num_base_bdevs": 2, 00:17:56.272 "num_base_bdevs_discovered": 2, 00:17:56.272 
"num_base_bdevs_operational": 2, 00:17:56.272 "base_bdevs_list": [ 00:17:56.272 { 00:17:56.272 "name": "BaseBdev1", 00:17:56.272 "uuid": "08e28c2b-9af9-40fa-827d-633156303f1b", 00:17:56.272 "is_configured": true, 00:17:56.272 "data_offset": 256, 00:17:56.272 "data_size": 7936 00:17:56.272 }, 00:17:56.272 { 00:17:56.272 "name": "BaseBdev2", 00:17:56.272 "uuid": "b6155210-dfa9-4014-bc2f-49016786756e", 00:17:56.272 "is_configured": true, 00:17:56.272 "data_offset": 256, 00:17:56.272 "data_size": 7936 00:17:56.272 } 00:17:56.272 ] 00:17:56.272 }' 00:17:56.272 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.272 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.530 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:56.530 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:56.530 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:56.530 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:56.530 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:56.530 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:56.530 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:56.530 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.530 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.530 21:25:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:56.530 [2024-11-26 21:25:14.598792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.530 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.530 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:56.530 "name": "Existed_Raid", 00:17:56.530 "aliases": [ 00:17:56.530 "e8bd1b85-7ea5-44d4-84ba-c27c4f800017" 00:17:56.530 ], 00:17:56.530 "product_name": "Raid Volume", 00:17:56.530 "block_size": 4128, 00:17:56.530 "num_blocks": 7936, 00:17:56.530 "uuid": "e8bd1b85-7ea5-44d4-84ba-c27c4f800017", 00:17:56.530 "md_size": 32, 00:17:56.530 "md_interleave": true, 00:17:56.530 "dif_type": 0, 00:17:56.530 "assigned_rate_limits": { 00:17:56.530 "rw_ios_per_sec": 0, 00:17:56.530 "rw_mbytes_per_sec": 0, 00:17:56.530 "r_mbytes_per_sec": 0, 00:17:56.530 "w_mbytes_per_sec": 0 00:17:56.530 }, 00:17:56.530 "claimed": false, 00:17:56.530 "zoned": false, 00:17:56.530 "supported_io_types": { 00:17:56.530 "read": true, 00:17:56.531 "write": true, 00:17:56.531 "unmap": false, 00:17:56.531 "flush": false, 00:17:56.531 "reset": true, 00:17:56.531 "nvme_admin": false, 00:17:56.531 "nvme_io": false, 00:17:56.531 "nvme_io_md": false, 00:17:56.531 "write_zeroes": true, 00:17:56.531 "zcopy": false, 00:17:56.531 "get_zone_info": false, 00:17:56.531 "zone_management": false, 00:17:56.531 "zone_append": false, 00:17:56.531 "compare": false, 00:17:56.531 "compare_and_write": false, 00:17:56.531 "abort": false, 00:17:56.531 "seek_hole": false, 00:17:56.531 "seek_data": false, 00:17:56.531 "copy": false, 00:17:56.531 "nvme_iov_md": false 00:17:56.531 }, 00:17:56.531 "memory_domains": [ 00:17:56.531 { 00:17:56.531 "dma_device_id": "system", 00:17:56.531 "dma_device_type": 1 00:17:56.531 }, 00:17:56.531 { 00:17:56.531 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:56.531 "dma_device_type": 2 00:17:56.531 }, 00:17:56.531 { 00:17:56.531 "dma_device_id": "system", 00:17:56.531 "dma_device_type": 1 00:17:56.531 }, 00:17:56.531 { 00:17:56.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.531 "dma_device_type": 2 00:17:56.531 } 00:17:56.531 ], 00:17:56.531 "driver_specific": { 00:17:56.531 "raid": { 00:17:56.531 "uuid": "e8bd1b85-7ea5-44d4-84ba-c27c4f800017", 00:17:56.531 "strip_size_kb": 0, 00:17:56.531 "state": "online", 00:17:56.531 "raid_level": "raid1", 00:17:56.531 "superblock": true, 00:17:56.531 "num_base_bdevs": 2, 00:17:56.531 "num_base_bdevs_discovered": 2, 00:17:56.531 "num_base_bdevs_operational": 2, 00:17:56.531 "base_bdevs_list": [ 00:17:56.531 { 00:17:56.531 "name": "BaseBdev1", 00:17:56.531 "uuid": "08e28c2b-9af9-40fa-827d-633156303f1b", 00:17:56.531 "is_configured": true, 00:17:56.531 "data_offset": 256, 00:17:56.531 "data_size": 7936 00:17:56.531 }, 00:17:56.531 { 00:17:56.531 "name": "BaseBdev2", 00:17:56.531 "uuid": "b6155210-dfa9-4014-bc2f-49016786756e", 00:17:56.531 "is_configured": true, 00:17:56.531 "data_offset": 256, 00:17:56.531 "data_size": 7936 00:17:56.531 } 00:17:56.531 ] 00:17:56.531 } 00:17:56.531 } 00:17:56.531 }' 00:17:56.531 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:56.531 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:56.531 BaseBdev2' 00:17:56.531 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:56.790 
21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.790 [2024-11-26 21:25:14.766287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.790 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.790 21:25:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.791 "name": "Existed_Raid", 00:17:56.791 "uuid": "e8bd1b85-7ea5-44d4-84ba-c27c4f800017", 00:17:56.791 "strip_size_kb": 0, 00:17:56.791 "state": "online", 00:17:56.791 "raid_level": "raid1", 00:17:56.791 "superblock": true, 00:17:56.791 "num_base_bdevs": 2, 00:17:56.791 "num_base_bdevs_discovered": 1, 00:17:56.791 "num_base_bdevs_operational": 1, 00:17:56.791 "base_bdevs_list": [ 00:17:56.791 { 00:17:56.791 "name": null, 00:17:56.791 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:56.791 "is_configured": false, 00:17:56.791 "data_offset": 0, 00:17:56.791 "data_size": 7936 00:17:56.791 }, 00:17:56.791 { 00:17:56.791 "name": "BaseBdev2", 00:17:56.791 "uuid": "b6155210-dfa9-4014-bc2f-49016786756e", 00:17:56.791 "is_configured": true, 00:17:56.791 "data_offset": 256, 00:17:56.791 "data_size": 7936 00:17:56.791 } 00:17:56.791 ] 00:17:56.791 }' 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.791 21:25:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:57.359 21:25:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.359 [2024-11-26 21:25:15.367975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:57.359 [2024-11-26 21:25:15.368101] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.359 [2024-11-26 21:25:15.467470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.359 [2024-11-26 21:25:15.467526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.359 [2024-11-26 21:25:15.467541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.359 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88253 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88253 ']' 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88253 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88253 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.618 killing process with pid 88253 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88253' 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88253 00:17:57.618 [2024-11-26 21:25:15.553508] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.618 21:25:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88253 00:17:57.618 [2024-11-26 21:25:15.571113] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.998 
21:25:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:58.998 00:17:58.998 real 0m4.917s 00:17:58.998 user 0m6.866s 00:17:58.998 sys 0m0.944s 00:17:58.998 21:25:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.998 21:25:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.998 ************************************ 00:17:58.998 END TEST raid_state_function_test_sb_md_interleaved 00:17:58.998 ************************************ 00:17:58.998 21:25:16 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:58.998 21:25:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:58.998 21:25:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.998 21:25:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.998 ************************************ 00:17:58.998 START TEST raid_superblock_test_md_interleaved 00:17:58.998 ************************************ 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88501 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88501 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88501 ']' 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.998 21:25:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.998 [2024-11-26 21:25:16.910938] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:58.998 [2024-11-26 21:25:16.911067] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88501 ] 00:17:58.998 [2024-11-26 21:25:17.086673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.257 [2024-11-26 21:25:17.208512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.516 [2024-11-26 21:25:17.448999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.516 [2024-11-26 21:25:17.449035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.776 malloc1 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.776 [2024-11-26 21:25:17.776179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:59.776 [2024-11-26 21:25:17.776238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.776 [2024-11-26 21:25:17.776264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:59.776 [2024-11-26 21:25:17.776274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.776 
[2024-11-26 21:25:17.778330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.776 [2024-11-26 21:25:17.778364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:59.776 pt1 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.776 malloc2 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.776 [2024-11-26 21:25:17.835570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.776 [2024-11-26 21:25:17.835623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.776 [2024-11-26 21:25:17.835647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:59.776 [2024-11-26 21:25:17.835656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.776 [2024-11-26 21:25:17.837736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.776 [2024-11-26 21:25:17.837767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.776 pt2 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.776 [2024-11-26 21:25:17.847590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:59.776 [2024-11-26 21:25:17.849620] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.776 [2024-11-26 21:25:17.849806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:59.776 [2024-11-26 21:25:17.849819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:59.776 [2024-11-26 21:25:17.849894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:59.776 [2024-11-26 21:25:17.849976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:59.776 [2024-11-26 21:25:17.849994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:59.776 [2024-11-26 21:25:17.850060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.776 
21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.776 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.776 "name": "raid_bdev1", 00:17:59.776 "uuid": "7c94ff16-0180-41f5-b97c-4ba8c5661b40", 00:17:59.777 "strip_size_kb": 0, 00:17:59.777 "state": "online", 00:17:59.777 "raid_level": "raid1", 00:17:59.777 "superblock": true, 00:17:59.777 "num_base_bdevs": 2, 00:17:59.777 "num_base_bdevs_discovered": 2, 00:17:59.777 "num_base_bdevs_operational": 2, 00:17:59.777 "base_bdevs_list": [ 00:17:59.777 { 00:17:59.777 "name": "pt1", 00:17:59.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.777 "is_configured": true, 00:17:59.777 "data_offset": 256, 00:17:59.777 "data_size": 7936 00:17:59.777 }, 00:17:59.777 { 00:17:59.777 "name": "pt2", 00:17:59.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.777 "is_configured": true, 00:17:59.777 "data_offset": 256, 00:17:59.777 "data_size": 7936 00:17:59.777 } 00:17:59.777 ] 00:17:59.777 }' 00:17:59.777 21:25:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.777 21:25:17 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:00.345 [2024-11-26 21:25:18.275102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.345 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:00.345 "name": "raid_bdev1", 00:18:00.345 "aliases": [ 00:18:00.345 "7c94ff16-0180-41f5-b97c-4ba8c5661b40" 00:18:00.345 ], 00:18:00.345 "product_name": "Raid Volume", 00:18:00.345 "block_size": 4128, 00:18:00.345 "num_blocks": 7936, 00:18:00.345 "uuid": "7c94ff16-0180-41f5-b97c-4ba8c5661b40", 00:18:00.345 "md_size": 32, 
00:18:00.345 "md_interleave": true, 00:18:00.345 "dif_type": 0, 00:18:00.345 "assigned_rate_limits": { 00:18:00.345 "rw_ios_per_sec": 0, 00:18:00.345 "rw_mbytes_per_sec": 0, 00:18:00.345 "r_mbytes_per_sec": 0, 00:18:00.345 "w_mbytes_per_sec": 0 00:18:00.345 }, 00:18:00.345 "claimed": false, 00:18:00.345 "zoned": false, 00:18:00.345 "supported_io_types": { 00:18:00.345 "read": true, 00:18:00.345 "write": true, 00:18:00.345 "unmap": false, 00:18:00.345 "flush": false, 00:18:00.345 "reset": true, 00:18:00.345 "nvme_admin": false, 00:18:00.345 "nvme_io": false, 00:18:00.345 "nvme_io_md": false, 00:18:00.346 "write_zeroes": true, 00:18:00.346 "zcopy": false, 00:18:00.346 "get_zone_info": false, 00:18:00.346 "zone_management": false, 00:18:00.346 "zone_append": false, 00:18:00.346 "compare": false, 00:18:00.346 "compare_and_write": false, 00:18:00.346 "abort": false, 00:18:00.346 "seek_hole": false, 00:18:00.346 "seek_data": false, 00:18:00.346 "copy": false, 00:18:00.346 "nvme_iov_md": false 00:18:00.346 }, 00:18:00.346 "memory_domains": [ 00:18:00.346 { 00:18:00.346 "dma_device_id": "system", 00:18:00.346 "dma_device_type": 1 00:18:00.346 }, 00:18:00.346 { 00:18:00.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.346 "dma_device_type": 2 00:18:00.346 }, 00:18:00.346 { 00:18:00.346 "dma_device_id": "system", 00:18:00.346 "dma_device_type": 1 00:18:00.346 }, 00:18:00.346 { 00:18:00.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.346 "dma_device_type": 2 00:18:00.346 } 00:18:00.346 ], 00:18:00.346 "driver_specific": { 00:18:00.346 "raid": { 00:18:00.346 "uuid": "7c94ff16-0180-41f5-b97c-4ba8c5661b40", 00:18:00.346 "strip_size_kb": 0, 00:18:00.346 "state": "online", 00:18:00.346 "raid_level": "raid1", 00:18:00.346 "superblock": true, 00:18:00.346 "num_base_bdevs": 2, 00:18:00.346 "num_base_bdevs_discovered": 2, 00:18:00.346 "num_base_bdevs_operational": 2, 00:18:00.346 "base_bdevs_list": [ 00:18:00.346 { 00:18:00.346 "name": "pt1", 00:18:00.346 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:00.346 "is_configured": true, 00:18:00.346 "data_offset": 256, 00:18:00.346 "data_size": 7936 00:18:00.346 }, 00:18:00.346 { 00:18:00.346 "name": "pt2", 00:18:00.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.346 "is_configured": true, 00:18:00.346 "data_offset": 256, 00:18:00.346 "data_size": 7936 00:18:00.346 } 00:18:00.346 ] 00:18:00.346 } 00:18:00.346 } 00:18:00.346 }' 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:00.346 pt2' 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:00.346 21:25:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:00.346 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:00.605 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.605 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.605 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.605 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:00.605 [2024-11-26 21:25:18.506648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.605 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7c94ff16-0180-41f5-b97c-4ba8c5661b40 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 7c94ff16-0180-41f5-b97c-4ba8c5661b40 ']' 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 [2024-11-26 21:25:18.554298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.606 [2024-11-26 21:25:18.554320] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.606 [2024-11-26 21:25:18.554404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.606 [2024-11-26 21:25:18.554457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.606 [2024-11-26 21:25:18.554473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.606 21:25:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 21:25:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 [2024-11-26 21:25:18.694078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:00.606 [2024-11-26 21:25:18.696158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:00.606 [2024-11-26 21:25:18.696242] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:00.606 [2024-11-26 21:25:18.696290] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:00.606 [2024-11-26 21:25:18.696304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.606 [2024-11-26 21:25:18.696314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:00.606 request: 00:18:00.606 { 00:18:00.606 "name": "raid_bdev1", 00:18:00.606 "raid_level": "raid1", 00:18:00.606 "base_bdevs": [ 00:18:00.606 "malloc1", 00:18:00.606 "malloc2" 00:18:00.606 ], 00:18:00.606 "superblock": false, 00:18:00.606 "method": "bdev_raid_create", 00:18:00.606 "req_id": 1 00:18:00.606 } 00:18:00.606 Got JSON-RPC error response 00:18:00.606 response: 00:18:00.606 { 00:18:00.606 "code": -17, 00:18:00.606 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:00.606 } 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.606 21:25:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 [2024-11-26 21:25:18.745989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.606 [2024-11-26 21:25:18.746033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.606 [2024-11-26 21:25:18.746050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:00.606 [2024-11-26 21:25:18.746061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.606 [2024-11-26 21:25:18.748118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.606 [2024-11-26 21:25:18.748150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.606 [2024-11-26 21:25:18.748201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:00.606 [2024-11-26 21:25:18.748259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.606 pt1 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.606 21:25:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.606 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.865 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.865 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.865 
"name": "raid_bdev1", 00:18:00.865 "uuid": "7c94ff16-0180-41f5-b97c-4ba8c5661b40", 00:18:00.865 "strip_size_kb": 0, 00:18:00.865 "state": "configuring", 00:18:00.865 "raid_level": "raid1", 00:18:00.865 "superblock": true, 00:18:00.865 "num_base_bdevs": 2, 00:18:00.865 "num_base_bdevs_discovered": 1, 00:18:00.865 "num_base_bdevs_operational": 2, 00:18:00.865 "base_bdevs_list": [ 00:18:00.865 { 00:18:00.865 "name": "pt1", 00:18:00.865 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.865 "is_configured": true, 00:18:00.865 "data_offset": 256, 00:18:00.865 "data_size": 7936 00:18:00.865 }, 00:18:00.865 { 00:18:00.865 "name": null, 00:18:00.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.865 "is_configured": false, 00:18:00.865 "data_offset": 256, 00:18:00.865 "data_size": 7936 00:18:00.865 } 00:18:00.865 ] 00:18:00.865 }' 00:18:00.865 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.865 21:25:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.124 [2024-11-26 21:25:19.221154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.124 [2024-11-26 21:25:19.221211] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.124 [2024-11-26 21:25:19.221230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:01.124 [2024-11-26 21:25:19.221240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.124 [2024-11-26 21:25:19.221356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.124 [2024-11-26 21:25:19.221378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.124 [2024-11-26 21:25:19.221413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:01.124 [2024-11-26 21:25:19.221432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.124 [2024-11-26 21:25:19.221505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:01.124 [2024-11-26 21:25:19.221532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:01.124 [2024-11-26 21:25:19.221601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:01.124 [2024-11-26 21:25:19.221667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:01.124 [2024-11-26 21:25:19.221676] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:01.124 [2024-11-26 21:25:19.221727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.124 pt2 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.124 21:25:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.124 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.382 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.382 "name": 
"raid_bdev1", 00:18:01.382 "uuid": "7c94ff16-0180-41f5-b97c-4ba8c5661b40", 00:18:01.382 "strip_size_kb": 0, 00:18:01.382 "state": "online", 00:18:01.382 "raid_level": "raid1", 00:18:01.382 "superblock": true, 00:18:01.382 "num_base_bdevs": 2, 00:18:01.382 "num_base_bdevs_discovered": 2, 00:18:01.382 "num_base_bdevs_operational": 2, 00:18:01.382 "base_bdevs_list": [ 00:18:01.382 { 00:18:01.382 "name": "pt1", 00:18:01.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.382 "is_configured": true, 00:18:01.382 "data_offset": 256, 00:18:01.382 "data_size": 7936 00:18:01.382 }, 00:18:01.382 { 00:18:01.382 "name": "pt2", 00:18:01.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.382 "is_configured": true, 00:18:01.382 "data_offset": 256, 00:18:01.382 "data_size": 7936 00:18:01.382 } 00:18:01.382 ] 00:18:01.382 }' 00:18:01.382 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.382 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.641 21:25:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.641 [2024-11-26 21:25:19.672600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.641 "name": "raid_bdev1", 00:18:01.641 "aliases": [ 00:18:01.641 "7c94ff16-0180-41f5-b97c-4ba8c5661b40" 00:18:01.641 ], 00:18:01.641 "product_name": "Raid Volume", 00:18:01.641 "block_size": 4128, 00:18:01.641 "num_blocks": 7936, 00:18:01.641 "uuid": "7c94ff16-0180-41f5-b97c-4ba8c5661b40", 00:18:01.641 "md_size": 32, 00:18:01.641 "md_interleave": true, 00:18:01.641 "dif_type": 0, 00:18:01.641 "assigned_rate_limits": { 00:18:01.641 "rw_ios_per_sec": 0, 00:18:01.641 "rw_mbytes_per_sec": 0, 00:18:01.641 "r_mbytes_per_sec": 0, 00:18:01.641 "w_mbytes_per_sec": 0 00:18:01.641 }, 00:18:01.641 "claimed": false, 00:18:01.641 "zoned": false, 00:18:01.641 "supported_io_types": { 00:18:01.641 "read": true, 00:18:01.641 "write": true, 00:18:01.641 "unmap": false, 00:18:01.641 "flush": false, 00:18:01.641 "reset": true, 00:18:01.641 "nvme_admin": false, 00:18:01.641 "nvme_io": false, 00:18:01.641 "nvme_io_md": false, 00:18:01.641 "write_zeroes": true, 00:18:01.641 "zcopy": false, 00:18:01.641 "get_zone_info": false, 00:18:01.641 "zone_management": false, 00:18:01.641 "zone_append": false, 00:18:01.641 "compare": false, 00:18:01.641 "compare_and_write": false, 00:18:01.641 "abort": false, 00:18:01.641 "seek_hole": false, 00:18:01.641 "seek_data": false, 00:18:01.641 "copy": false, 00:18:01.641 "nvme_iov_md": 
false 00:18:01.641 }, 00:18:01.641 "memory_domains": [ 00:18:01.641 { 00:18:01.641 "dma_device_id": "system", 00:18:01.641 "dma_device_type": 1 00:18:01.641 }, 00:18:01.641 { 00:18:01.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.641 "dma_device_type": 2 00:18:01.641 }, 00:18:01.641 { 00:18:01.641 "dma_device_id": "system", 00:18:01.641 "dma_device_type": 1 00:18:01.641 }, 00:18:01.641 { 00:18:01.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.641 "dma_device_type": 2 00:18:01.641 } 00:18:01.641 ], 00:18:01.641 "driver_specific": { 00:18:01.641 "raid": { 00:18:01.641 "uuid": "7c94ff16-0180-41f5-b97c-4ba8c5661b40", 00:18:01.641 "strip_size_kb": 0, 00:18:01.641 "state": "online", 00:18:01.641 "raid_level": "raid1", 00:18:01.641 "superblock": true, 00:18:01.641 "num_base_bdevs": 2, 00:18:01.641 "num_base_bdevs_discovered": 2, 00:18:01.641 "num_base_bdevs_operational": 2, 00:18:01.641 "base_bdevs_list": [ 00:18:01.641 { 00:18:01.641 "name": "pt1", 00:18:01.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.641 "is_configured": true, 00:18:01.641 "data_offset": 256, 00:18:01.641 "data_size": 7936 00:18:01.641 }, 00:18:01.641 { 00:18:01.641 "name": "pt2", 00:18:01.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.641 "is_configured": true, 00:18:01.641 "data_offset": 256, 00:18:01.641 "data_size": 7936 00:18:01.641 } 00:18:01.641 ] 00:18:01.641 } 00:18:01.641 } 00:18:01.641 }' 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:01.641 pt2' 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.641 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:01.900 [2024-11-26 21:25:19.864456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 7c94ff16-0180-41f5-b97c-4ba8c5661b40 '!=' 7c94ff16-0180-41f5-b97c-4ba8c5661b40 ']' 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.900 [2024-11-26 21:25:19.908148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:01.900 "name": "raid_bdev1", 00:18:01.900 "uuid": "7c94ff16-0180-41f5-b97c-4ba8c5661b40", 00:18:01.900 "strip_size_kb": 0, 00:18:01.900 "state": "online", 00:18:01.900 "raid_level": "raid1", 00:18:01.900 "superblock": true, 00:18:01.900 "num_base_bdevs": 2, 00:18:01.900 "num_base_bdevs_discovered": 1, 00:18:01.900 "num_base_bdevs_operational": 1, 00:18:01.900 "base_bdevs_list": [ 00:18:01.900 { 00:18:01.900 "name": null, 00:18:01.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.900 "is_configured": false, 00:18:01.900 "data_offset": 0, 00:18:01.900 "data_size": 7936 00:18:01.900 }, 00:18:01.900 { 00:18:01.900 "name": "pt2", 00:18:01.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.900 "is_configured": true, 00:18:01.900 "data_offset": 256, 00:18:01.900 "data_size": 7936 00:18:01.900 } 00:18:01.900 ] 00:18:01.900 }' 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.900 21:25:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.467 [2024-11-26 21:25:20.339486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.467 [2024-11-26 21:25:20.339510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.467 [2024-11-26 21:25:20.339561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.467 [2024-11-26 21:25:20.339598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:02.467 [2024-11-26 21:25:20.339610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.467 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.467 [2024-11-26 21:25:20.391412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.467 [2024-11-26 21:25:20.391457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.467 [2024-11-26 21:25:20.391472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:02.467 [2024-11-26 21:25:20.391482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.467 [2024-11-26 21:25:20.393641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.467 [2024-11-26 21:25:20.393676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.467 [2024-11-26 21:25:20.393713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:02.467 [2024-11-26 21:25:20.393761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.467 [2024-11-26 21:25:20.393812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:02.467 [2024-11-26 21:25:20.393823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:02.468 [2024-11-26 21:25:20.393902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:02.468 [2024-11-26 21:25:20.393982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:02.468 [2024-11-26 21:25:20.393991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:02.468 [2024-11-26 21:25:20.394042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.468 pt2 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.468 21:25:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.468 "name": "raid_bdev1", 00:18:02.468 "uuid": "7c94ff16-0180-41f5-b97c-4ba8c5661b40", 00:18:02.468 "strip_size_kb": 0, 00:18:02.468 "state": "online", 00:18:02.468 "raid_level": "raid1", 00:18:02.468 "superblock": true, 00:18:02.468 "num_base_bdevs": 2, 00:18:02.468 "num_base_bdevs_discovered": 1, 00:18:02.468 "num_base_bdevs_operational": 1, 00:18:02.468 "base_bdevs_list": [ 00:18:02.468 { 00:18:02.468 "name": null, 00:18:02.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.468 "is_configured": false, 00:18:02.468 "data_offset": 256, 00:18:02.468 "data_size": 7936 00:18:02.468 }, 00:18:02.468 { 00:18:02.468 "name": "pt2", 00:18:02.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.468 "is_configured": true, 00:18:02.468 "data_offset": 256, 00:18:02.468 "data_size": 7936 00:18:02.468 } 00:18:02.468 ] 00:18:02.468 }' 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.468 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.727 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.727 21:25:20 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.727 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.727 [2024-11-26 21:25:20.850578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.727 [2024-11-26 21:25:20.850606] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.727 [2024-11-26 21:25:20.850652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.727 [2024-11-26 21:25:20.850691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.727 [2024-11-26 21:25:20.850700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:02.727 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.727 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:02.727 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.727 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.727 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.727 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.986 [2024-11-26 21:25:20.906534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:02.986 [2024-11-26 21:25:20.906591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.986 [2024-11-26 21:25:20.906609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:02.986 [2024-11-26 21:25:20.906617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.986 [2024-11-26 21:25:20.908646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.986 [2024-11-26 21:25:20.908687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:02.986 [2024-11-26 21:25:20.908735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:02.986 [2024-11-26 21:25:20.908782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:02.986 [2024-11-26 21:25:20.908867] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:02.986 [2024-11-26 21:25:20.908879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.986 [2024-11-26 21:25:20.908896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:02.986 [2024-11-26 21:25:20.908949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.986 [2024-11-26 21:25:20.909030] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:02.986 [2024-11-26 21:25:20.909040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:02.986 [2024-11-26 21:25:20.909104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:02.986 [2024-11-26 21:25:20.909163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:02.986 [2024-11-26 21:25:20.909172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:02.986 [2024-11-26 21:25:20.909234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.986 pt1 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.986 21:25:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.986 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.986 "name": "raid_bdev1", 00:18:02.986 "uuid": "7c94ff16-0180-41f5-b97c-4ba8c5661b40", 00:18:02.986 "strip_size_kb": 0, 00:18:02.986 "state": "online", 00:18:02.986 "raid_level": "raid1", 00:18:02.986 "superblock": true, 00:18:02.986 "num_base_bdevs": 2, 00:18:02.987 "num_base_bdevs_discovered": 1, 00:18:02.987 "num_base_bdevs_operational": 1, 00:18:02.987 "base_bdevs_list": [ 00:18:02.987 { 00:18:02.987 "name": null, 00:18:02.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.987 "is_configured": false, 00:18:02.987 "data_offset": 256, 00:18:02.987 "data_size": 7936 00:18:02.987 }, 00:18:02.987 { 00:18:02.987 "name": "pt2", 00:18:02.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.987 "is_configured": true, 00:18:02.987 "data_offset": 256, 00:18:02.987 "data_size": 7936 00:18:02.987 } 00:18:02.987 ] 00:18:02.987 }' 00:18:02.987 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.987 21:25:20 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:03.346 [2024-11-26 21:25:21.389945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 7c94ff16-0180-41f5-b97c-4ba8c5661b40 '!=' 7c94ff16-0180-41f5-b97c-4ba8c5661b40 ']' 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88501 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88501 ']' 00:18:03.346 21:25:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88501 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88501 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88501' 00:18:03.346 killing process with pid 88501 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88501 00:18:03.346 [2024-11-26 21:25:21.464233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.346 [2024-11-26 21:25:21.464332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.346 [2024-11-26 21:25:21.464391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.346 [2024-11-26 21:25:21.464412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:03.346 21:25:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88501 00:18:03.614 [2024-11-26 21:25:21.688562] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.992 21:25:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:04.992 00:18:04.992 real 0m6.043s 00:18:04.992 user 0m9.006s 00:18:04.992 sys 0m1.160s 00:18:04.992 
21:25:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.992 21:25:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.992 ************************************ 00:18:04.992 END TEST raid_superblock_test_md_interleaved 00:18:04.992 ************************************ 00:18:04.992 21:25:22 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:04.992 21:25:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:04.992 21:25:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.992 21:25:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.992 ************************************ 00:18:04.992 START TEST raid_rebuild_test_sb_md_interleaved 00:18:04.992 ************************************ 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:04.992 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88824 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88824 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88824 ']' 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.993 21:25:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.993 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:04.993 Zero copy mechanism will not be used. 00:18:04.993 [2024-11-26 21:25:23.057215] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:18:04.993 [2024-11-26 21:25:23.057334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88824 ] 00:18:05.252 [2024-11-26 21:25:23.234129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.252 [2024-11-26 21:25:23.363475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.510 [2024-11-26 21:25:23.597344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.510 [2024-11-26 21:25:23.597383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.768 BaseBdev1_malloc 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.768 21:25:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.768 [2024-11-26 21:25:23.914194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:05.768 [2024-11-26 21:25:23.914259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.768 [2024-11-26 21:25:23.914287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:05.768 [2024-11-26 21:25:23.914299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.768 [2024-11-26 21:25:23.916201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.768 [2024-11-26 21:25:23.916239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:05.768 BaseBdev1 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.768 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.027 BaseBdev2_malloc 00:18:06.027 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.027 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:06.027 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.027 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.027 [2024-11-26 21:25:23.975976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:06.027 [2024-11-26 21:25:23.976038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.027 [2024-11-26 21:25:23.976059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:06.027 [2024-11-26 21:25:23.976073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.027 [2024-11-26 21:25:23.978204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.027 [2024-11-26 21:25:23.978238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:06.027 BaseBdev2 00:18:06.027 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.027 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:06.027 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.027 21:25:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.027 spare_malloc 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.027 spare_delay 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.027 [2024-11-26 21:25:24.083645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.027 [2024-11-26 21:25:24.083700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.027 [2024-11-26 21:25:24.083721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:06.027 [2024-11-26 21:25:24.083732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.027 [2024-11-26 21:25:24.085816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.027 [2024-11-26 21:25:24.085852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.027 spare 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.027 [2024-11-26 21:25:24.095672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.027 [2024-11-26 21:25:24.097722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.027 [2024-11-26 
21:25:24.097905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:06.027 [2024-11-26 21:25:24.097921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:06.027 [2024-11-26 21:25:24.098005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:06.027 [2024-11-26 21:25:24.098077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:06.027 [2024-11-26 21:25:24.098085] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:06.027 [2024-11-26 21:25:24.098149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.027 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.028 "name": "raid_bdev1", 00:18:06.028 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:06.028 "strip_size_kb": 0, 00:18:06.028 "state": "online", 00:18:06.028 "raid_level": "raid1", 00:18:06.028 "superblock": true, 00:18:06.028 "num_base_bdevs": 2, 00:18:06.028 "num_base_bdevs_discovered": 2, 00:18:06.028 "num_base_bdevs_operational": 2, 00:18:06.028 "base_bdevs_list": [ 00:18:06.028 { 00:18:06.028 "name": "BaseBdev1", 00:18:06.028 "uuid": "b5726b79-6521-5366-a8e4-db8ce1902a68", 00:18:06.028 "is_configured": true, 00:18:06.028 "data_offset": 256, 00:18:06.028 "data_size": 7936 00:18:06.028 }, 00:18:06.028 { 00:18:06.028 "name": "BaseBdev2", 00:18:06.028 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:06.028 "is_configured": true, 00:18:06.028 "data_offset": 256, 00:18:06.028 "data_size": 7936 00:18:06.028 } 00:18:06.028 ] 00:18:06.028 }' 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.028 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.595 21:25:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.595 [2024-11-26 21:25:24.547117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:06.595 21:25:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.595 [2024-11-26 21:25:24.630696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.595 21:25:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.595 "name": "raid_bdev1", 00:18:06.595 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:06.595 "strip_size_kb": 0, 00:18:06.595 "state": "online", 00:18:06.595 "raid_level": "raid1", 00:18:06.595 "superblock": true, 00:18:06.595 "num_base_bdevs": 2, 00:18:06.595 "num_base_bdevs_discovered": 1, 00:18:06.595 "num_base_bdevs_operational": 1, 00:18:06.595 "base_bdevs_list": [ 00:18:06.595 { 00:18:06.595 "name": null, 00:18:06.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.595 "is_configured": false, 00:18:06.595 "data_offset": 0, 00:18:06.595 "data_size": 7936 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "name": "BaseBdev2", 00:18:06.595 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:06.595 "is_configured": true, 00:18:06.595 "data_offset": 256, 00:18:06.595 "data_size": 7936 00:18:06.595 } 00:18:06.595 ] 00:18:06.595 }' 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.595 21:25:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.162 21:25:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:07.162 21:25:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.162 21:25:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.162 [2024-11-26 21:25:25.038020] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.162 [2024-11-26 21:25:25.057013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:07.162 21:25:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.162 21:25:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:07.162 [2024-11-26 21:25:25.059065] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.098 "name": "raid_bdev1", 00:18:08.098 
"uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:08.098 "strip_size_kb": 0, 00:18:08.098 "state": "online", 00:18:08.098 "raid_level": "raid1", 00:18:08.098 "superblock": true, 00:18:08.098 "num_base_bdevs": 2, 00:18:08.098 "num_base_bdevs_discovered": 2, 00:18:08.098 "num_base_bdevs_operational": 2, 00:18:08.098 "process": { 00:18:08.098 "type": "rebuild", 00:18:08.098 "target": "spare", 00:18:08.098 "progress": { 00:18:08.098 "blocks": 2560, 00:18:08.098 "percent": 32 00:18:08.098 } 00:18:08.098 }, 00:18:08.098 "base_bdevs_list": [ 00:18:08.098 { 00:18:08.098 "name": "spare", 00:18:08.098 "uuid": "0b296a93-1c7c-5d12-8f37-729e0703cffa", 00:18:08.098 "is_configured": true, 00:18:08.098 "data_offset": 256, 00:18:08.098 "data_size": 7936 00:18:08.098 }, 00:18:08.098 { 00:18:08.098 "name": "BaseBdev2", 00:18:08.098 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:08.098 "is_configured": true, 00:18:08.098 "data_offset": 256, 00:18:08.098 "data_size": 7936 00:18:08.098 } 00:18:08.098 ] 00:18:08.098 }' 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.098 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.099 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:08.099 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.099 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.099 [2024-11-26 21:25:26.206287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:08.357 [2024-11-26 21:25:26.267839] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:08.357 [2024-11-26 21:25:26.267901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.357 [2024-11-26 21:25:26.267916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.358 [2024-11-26 21:25:26.267930] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.358 "name": "raid_bdev1", 00:18:08.358 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:08.358 "strip_size_kb": 0, 00:18:08.358 "state": "online", 00:18:08.358 "raid_level": "raid1", 00:18:08.358 "superblock": true, 00:18:08.358 "num_base_bdevs": 2, 00:18:08.358 "num_base_bdevs_discovered": 1, 00:18:08.358 "num_base_bdevs_operational": 1, 00:18:08.358 "base_bdevs_list": [ 00:18:08.358 { 00:18:08.358 "name": null, 00:18:08.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.358 "is_configured": false, 00:18:08.358 "data_offset": 0, 00:18:08.358 "data_size": 7936 00:18:08.358 }, 00:18:08.358 { 00:18:08.358 "name": "BaseBdev2", 00:18:08.358 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:08.358 "is_configured": true, 00:18:08.358 "data_offset": 256, 00:18:08.358 "data_size": 7936 00:18:08.358 } 00:18:08.358 ] 00:18:08.358 }' 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.358 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.616 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.616 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:08.616 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.616 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.616 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.616 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.616 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.616 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.616 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.616 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.616 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.616 "name": "raid_bdev1", 00:18:08.616 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:08.616 "strip_size_kb": 0, 00:18:08.616 "state": "online", 00:18:08.616 "raid_level": "raid1", 00:18:08.616 "superblock": true, 00:18:08.617 "num_base_bdevs": 2, 00:18:08.617 "num_base_bdevs_discovered": 1, 00:18:08.617 "num_base_bdevs_operational": 1, 00:18:08.617 "base_bdevs_list": [ 00:18:08.617 { 00:18:08.617 "name": null, 00:18:08.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.617 "is_configured": false, 00:18:08.617 "data_offset": 0, 00:18:08.617 "data_size": 7936 00:18:08.617 }, 00:18:08.617 { 00:18:08.617 "name": "BaseBdev2", 00:18:08.617 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:08.617 "is_configured": true, 00:18:08.617 "data_offset": 256, 00:18:08.617 "data_size": 7936 00:18:08.617 } 00:18:08.617 ] 00:18:08.617 }' 
00:18:08.617 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.876 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.876 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.876 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.876 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:08.876 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.876 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.876 [2024-11-26 21:25:26.846828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.876 [2024-11-26 21:25:26.863507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:08.876 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.876 21:25:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:08.876 [2024-11-26 21:25:26.865629] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.811 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.811 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.811 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.811 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:09.811 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.811 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.811 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.811 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.811 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.811 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.811 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.811 "name": "raid_bdev1", 00:18:09.811 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:09.811 "strip_size_kb": 0, 00:18:09.811 "state": "online", 00:18:09.811 "raid_level": "raid1", 00:18:09.811 "superblock": true, 00:18:09.811 "num_base_bdevs": 2, 00:18:09.811 "num_base_bdevs_discovered": 2, 00:18:09.811 "num_base_bdevs_operational": 2, 00:18:09.811 "process": { 00:18:09.811 "type": "rebuild", 00:18:09.811 "target": "spare", 00:18:09.811 "progress": { 00:18:09.811 "blocks": 2560, 00:18:09.811 "percent": 32 00:18:09.811 } 00:18:09.811 }, 00:18:09.811 "base_bdevs_list": [ 00:18:09.811 { 00:18:09.811 "name": "spare", 00:18:09.811 "uuid": "0b296a93-1c7c-5d12-8f37-729e0703cffa", 00:18:09.811 "is_configured": true, 00:18:09.811 "data_offset": 256, 00:18:09.811 "data_size": 7936 00:18:09.811 }, 00:18:09.811 { 00:18:09.811 "name": "BaseBdev2", 00:18:09.811 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:09.811 "is_configured": true, 00:18:09.811 "data_offset": 256, 00:18:09.811 "data_size": 7936 00:18:09.811 } 00:18:09.811 ] 00:18:09.811 }' 00:18:09.811 21:25:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:10.070 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=729 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.070 21:25:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.070 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.070 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.070 21:25:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.070 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.070 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.070 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.070 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.070 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.070 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.070 "name": "raid_bdev1", 00:18:10.070 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:10.070 "strip_size_kb": 0, 00:18:10.070 "state": "online", 00:18:10.070 "raid_level": "raid1", 00:18:10.070 "superblock": true, 00:18:10.070 "num_base_bdevs": 2, 00:18:10.070 "num_base_bdevs_discovered": 2, 00:18:10.070 "num_base_bdevs_operational": 2, 00:18:10.070 "process": { 00:18:10.070 "type": "rebuild", 00:18:10.070 "target": "spare", 00:18:10.070 "progress": { 00:18:10.070 "blocks": 2816, 00:18:10.070 "percent": 35 00:18:10.070 } 00:18:10.070 }, 00:18:10.070 "base_bdevs_list": [ 00:18:10.070 { 00:18:10.070 "name": "spare", 00:18:10.070 "uuid": "0b296a93-1c7c-5d12-8f37-729e0703cffa", 00:18:10.070 "is_configured": true, 00:18:10.071 "data_offset": 256, 00:18:10.071 "data_size": 7936 00:18:10.071 }, 00:18:10.071 { 00:18:10.071 "name": "BaseBdev2", 00:18:10.071 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:10.071 "is_configured": true, 00:18:10.071 "data_offset": 256, 00:18:10.071 "data_size": 7936 00:18:10.071 } 00:18:10.071 ] 00:18:10.071 }' 00:18:10.071 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.071 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.071 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.071 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.071 21:25:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:11.014 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.014 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.014 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.014 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.014 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.014 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.014 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.014 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.014 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.014 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.272 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.272 21:25:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.272 "name": "raid_bdev1", 00:18:11.272 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:11.272 "strip_size_kb": 0, 00:18:11.272 "state": "online", 00:18:11.272 "raid_level": "raid1", 00:18:11.272 "superblock": true, 00:18:11.272 "num_base_bdevs": 2, 00:18:11.272 "num_base_bdevs_discovered": 2, 00:18:11.272 "num_base_bdevs_operational": 2, 00:18:11.272 "process": { 00:18:11.272 "type": "rebuild", 00:18:11.272 "target": "spare", 00:18:11.272 "progress": { 00:18:11.272 "blocks": 5632, 00:18:11.272 "percent": 70 00:18:11.272 } 00:18:11.272 }, 00:18:11.272 "base_bdevs_list": [ 00:18:11.272 { 00:18:11.272 "name": "spare", 00:18:11.272 "uuid": "0b296a93-1c7c-5d12-8f37-729e0703cffa", 00:18:11.272 "is_configured": true, 00:18:11.272 "data_offset": 256, 00:18:11.272 "data_size": 7936 00:18:11.272 }, 00:18:11.272 { 00:18:11.272 "name": "BaseBdev2", 00:18:11.272 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:11.272 "is_configured": true, 00:18:11.272 "data_offset": 256, 00:18:11.272 "data_size": 7936 00:18:11.272 } 00:18:11.272 ] 00:18:11.272 }' 00:18:11.272 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.272 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.272 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.272 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.272 21:25:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:11.839 [2024-11-26 21:25:29.987580] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:11.839 [2024-11-26 21:25:29.987701] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:11.839 [2024-11-26 21:25:29.987826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.407 "name": "raid_bdev1", 00:18:12.407 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:12.407 "strip_size_kb": 0, 00:18:12.407 "state": "online", 00:18:12.407 "raid_level": "raid1", 00:18:12.407 "superblock": true, 00:18:12.407 "num_base_bdevs": 2, 00:18:12.407 
"num_base_bdevs_discovered": 2, 00:18:12.407 "num_base_bdevs_operational": 2, 00:18:12.407 "base_bdevs_list": [ 00:18:12.407 { 00:18:12.407 "name": "spare", 00:18:12.407 "uuid": "0b296a93-1c7c-5d12-8f37-729e0703cffa", 00:18:12.407 "is_configured": true, 00:18:12.407 "data_offset": 256, 00:18:12.407 "data_size": 7936 00:18:12.407 }, 00:18:12.407 { 00:18:12.407 "name": "BaseBdev2", 00:18:12.407 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:12.407 "is_configured": true, 00:18:12.407 "data_offset": 256, 00:18:12.407 "data_size": 7936 00:18:12.407 } 00:18:12.407 ] 00:18:12.407 }' 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.407 21:25:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.407 "name": "raid_bdev1", 00:18:12.407 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:12.407 "strip_size_kb": 0, 00:18:12.407 "state": "online", 00:18:12.407 "raid_level": "raid1", 00:18:12.407 "superblock": true, 00:18:12.407 "num_base_bdevs": 2, 00:18:12.407 "num_base_bdevs_discovered": 2, 00:18:12.407 "num_base_bdevs_operational": 2, 00:18:12.407 "base_bdevs_list": [ 00:18:12.407 { 00:18:12.407 "name": "spare", 00:18:12.407 "uuid": "0b296a93-1c7c-5d12-8f37-729e0703cffa", 00:18:12.407 "is_configured": true, 00:18:12.407 "data_offset": 256, 00:18:12.407 "data_size": 7936 00:18:12.407 }, 00:18:12.407 { 00:18:12.407 "name": "BaseBdev2", 00:18:12.407 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:12.407 "is_configured": true, 00:18:12.407 "data_offset": 256, 00:18:12.407 "data_size": 7936 00:18:12.407 } 00:18:12.407 ] 00:18:12.407 }' 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.407 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.666 21:25:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.666 "name": 
"raid_bdev1", 00:18:12.666 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:12.666 "strip_size_kb": 0, 00:18:12.666 "state": "online", 00:18:12.666 "raid_level": "raid1", 00:18:12.666 "superblock": true, 00:18:12.666 "num_base_bdevs": 2, 00:18:12.666 "num_base_bdevs_discovered": 2, 00:18:12.666 "num_base_bdevs_operational": 2, 00:18:12.666 "base_bdevs_list": [ 00:18:12.666 { 00:18:12.666 "name": "spare", 00:18:12.666 "uuid": "0b296a93-1c7c-5d12-8f37-729e0703cffa", 00:18:12.666 "is_configured": true, 00:18:12.666 "data_offset": 256, 00:18:12.666 "data_size": 7936 00:18:12.666 }, 00:18:12.666 { 00:18:12.666 "name": "BaseBdev2", 00:18:12.666 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:12.666 "is_configured": true, 00:18:12.666 "data_offset": 256, 00:18:12.666 "data_size": 7936 00:18:12.666 } 00:18:12.666 ] 00:18:12.666 }' 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.666 21:25:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.924 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:12.924 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.924 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.924 [2024-11-26 21:25:31.078834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.924 [2024-11-26 21:25:31.078872] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.924 [2024-11-26 21:25:31.078977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.924 [2024-11-26 21:25:31.079049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.924 [2024-11-26 
21:25:31.079061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.183 21:25:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.183 [2024-11-26 21:25:31.146708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:13.183 [2024-11-26 21:25:31.146763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.183 [2024-11-26 21:25:31.146789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:13.183 [2024-11-26 21:25:31.146797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.183 [2024-11-26 21:25:31.149002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.183 [2024-11-26 21:25:31.149036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:13.183 [2024-11-26 21:25:31.149089] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:13.183 [2024-11-26 21:25:31.149149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.183 [2024-11-26 21:25:31.149260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.183 spare 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.183 [2024-11-26 21:25:31.249151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:13.183 [2024-11-26 21:25:31.249178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:13.183 [2024-11-26 21:25:31.249267] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:13.183 [2024-11-26 21:25:31.249346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:13.183 [2024-11-26 21:25:31.249355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:13.183 [2024-11-26 21:25:31.249430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.183 21:25:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.183 "name": "raid_bdev1", 00:18:13.183 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:13.183 "strip_size_kb": 0, 00:18:13.183 "state": "online", 00:18:13.183 "raid_level": "raid1", 00:18:13.183 "superblock": true, 00:18:13.183 "num_base_bdevs": 2, 00:18:13.183 "num_base_bdevs_discovered": 2, 00:18:13.183 "num_base_bdevs_operational": 2, 00:18:13.183 "base_bdevs_list": [ 00:18:13.183 { 00:18:13.183 "name": "spare", 00:18:13.183 "uuid": "0b296a93-1c7c-5d12-8f37-729e0703cffa", 00:18:13.183 "is_configured": true, 00:18:13.183 "data_offset": 256, 00:18:13.183 "data_size": 7936 00:18:13.183 }, 00:18:13.183 { 00:18:13.183 "name": "BaseBdev2", 00:18:13.183 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:13.183 "is_configured": true, 00:18:13.183 "data_offset": 256, 00:18:13.183 "data_size": 7936 00:18:13.183 } 00:18:13.183 ] 00:18:13.183 }' 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.183 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.750 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.750 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.750 21:25:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.750 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.750 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.750 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.750 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.750 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.750 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.750 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.750 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.750 "name": "raid_bdev1", 00:18:13.750 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:13.750 "strip_size_kb": 0, 00:18:13.750 "state": "online", 00:18:13.750 "raid_level": "raid1", 00:18:13.750 "superblock": true, 00:18:13.750 "num_base_bdevs": 2, 00:18:13.750 "num_base_bdevs_discovered": 2, 00:18:13.751 "num_base_bdevs_operational": 2, 00:18:13.751 "base_bdevs_list": [ 00:18:13.751 { 00:18:13.751 "name": "spare", 00:18:13.751 "uuid": "0b296a93-1c7c-5d12-8f37-729e0703cffa", 00:18:13.751 "is_configured": true, 00:18:13.751 "data_offset": 256, 00:18:13.751 "data_size": 7936 00:18:13.751 }, 00:18:13.751 { 00:18:13.751 "name": "BaseBdev2", 00:18:13.751 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:13.751 "is_configured": true, 00:18:13.751 "data_offset": 256, 00:18:13.751 "data_size": 7936 00:18:13.751 } 00:18:13.751 ] 00:18:13.751 }' 00:18:13.751 21:25:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.751 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.751 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.751 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.751 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.751 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:13.751 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.751 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.751 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.010 [2024-11-26 21:25:31.921464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.010 21:25:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.010 "name": "raid_bdev1", 00:18:14.010 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:14.010 "strip_size_kb": 0, 00:18:14.010 "state": "online", 00:18:14.010 
"raid_level": "raid1", 00:18:14.010 "superblock": true, 00:18:14.010 "num_base_bdevs": 2, 00:18:14.010 "num_base_bdevs_discovered": 1, 00:18:14.010 "num_base_bdevs_operational": 1, 00:18:14.010 "base_bdevs_list": [ 00:18:14.010 { 00:18:14.010 "name": null, 00:18:14.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.010 "is_configured": false, 00:18:14.010 "data_offset": 0, 00:18:14.010 "data_size": 7936 00:18:14.010 }, 00:18:14.010 { 00:18:14.010 "name": "BaseBdev2", 00:18:14.010 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:14.010 "is_configured": true, 00:18:14.010 "data_offset": 256, 00:18:14.010 "data_size": 7936 00:18:14.010 } 00:18:14.010 ] 00:18:14.010 }' 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.010 21:25:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.269 21:25:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:14.269 21:25:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.269 21:25:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.269 [2024-11-26 21:25:32.360719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.269 [2024-11-26 21:25:32.360920] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:14.269 [2024-11-26 21:25:32.361004] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:14.269 [2024-11-26 21:25:32.361098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.269 [2024-11-26 21:25:32.378383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:14.269 21:25:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.269 21:25:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:14.269 [2024-11-26 21:25:32.380482] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:15.645 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.645 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.645 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.645 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.645 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.645 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.645 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.645 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.645 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.645 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:15.646 "name": "raid_bdev1", 00:18:15.646 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:15.646 "strip_size_kb": 0, 00:18:15.646 "state": "online", 00:18:15.646 "raid_level": "raid1", 00:18:15.646 "superblock": true, 00:18:15.646 "num_base_bdevs": 2, 00:18:15.646 "num_base_bdevs_discovered": 2, 00:18:15.646 "num_base_bdevs_operational": 2, 00:18:15.646 "process": { 00:18:15.646 "type": "rebuild", 00:18:15.646 "target": "spare", 00:18:15.646 "progress": { 00:18:15.646 "blocks": 2560, 00:18:15.646 "percent": 32 00:18:15.646 } 00:18:15.646 }, 00:18:15.646 "base_bdevs_list": [ 00:18:15.646 { 00:18:15.646 "name": "spare", 00:18:15.646 "uuid": "0b296a93-1c7c-5d12-8f37-729e0703cffa", 00:18:15.646 "is_configured": true, 00:18:15.646 "data_offset": 256, 00:18:15.646 "data_size": 7936 00:18:15.646 }, 00:18:15.646 { 00:18:15.646 "name": "BaseBdev2", 00:18:15.646 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:15.646 "is_configured": true, 00:18:15.646 "data_offset": 256, 00:18:15.646 "data_size": 7936 00:18:15.646 } 00:18:15.646 ] 00:18:15.646 }' 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.646 [2024-11-26 21:25:33.523691] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.646 [2024-11-26 21:25:33.589258] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:15.646 [2024-11-26 21:25:33.589319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.646 [2024-11-26 21:25:33.589333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.646 [2024-11-26 21:25:33.589343] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.646 21:25:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.646 "name": "raid_bdev1", 00:18:15.646 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:15.646 "strip_size_kb": 0, 00:18:15.646 "state": "online", 00:18:15.646 "raid_level": "raid1", 00:18:15.646 "superblock": true, 00:18:15.646 "num_base_bdevs": 2, 00:18:15.646 "num_base_bdevs_discovered": 1, 00:18:15.646 "num_base_bdevs_operational": 1, 00:18:15.646 "base_bdevs_list": [ 00:18:15.646 { 00:18:15.646 "name": null, 00:18:15.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.646 "is_configured": false, 00:18:15.646 "data_offset": 0, 00:18:15.646 "data_size": 7936 00:18:15.646 }, 00:18:15.646 { 00:18:15.646 "name": "BaseBdev2", 00:18:15.646 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:15.646 "is_configured": true, 00:18:15.646 "data_offset": 256, 00:18:15.646 "data_size": 7936 00:18:15.646 } 00:18:15.646 ] 00:18:15.646 }' 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.646 21:25:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.905 21:25:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:15.905 21:25:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.905 21:25:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.905 [2024-11-26 21:25:34.043817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:15.905 [2024-11-26 21:25:34.043926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.905 [2024-11-26 21:25:34.043978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:15.905 [2024-11-26 21:25:34.044009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.905 [2024-11-26 21:25:34.044255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.905 [2024-11-26 21:25:34.044306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:15.905 [2024-11-26 21:25:34.044377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:15.905 [2024-11-26 21:25:34.044414] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.905 [2024-11-26 21:25:34.044453] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:15.905 [2024-11-26 21:25:34.044521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.164 [2024-11-26 21:25:34.061362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:16.164 spare 00:18:16.164 21:25:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.164 21:25:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:16.164 [2024-11-26 21:25:34.063619] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:17.101 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.101 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.101 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.101 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.101 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.101 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.101 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.101 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.101 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.101 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.101 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:17.101 "name": "raid_bdev1", 00:18:17.101 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:17.101 "strip_size_kb": 0, 00:18:17.101 "state": "online", 00:18:17.101 "raid_level": "raid1", 00:18:17.101 "superblock": true, 00:18:17.101 "num_base_bdevs": 2, 00:18:17.101 "num_base_bdevs_discovered": 2, 00:18:17.101 "num_base_bdevs_operational": 2, 00:18:17.101 "process": { 00:18:17.101 "type": "rebuild", 00:18:17.101 "target": "spare", 00:18:17.101 "progress": { 00:18:17.101 "blocks": 2560, 00:18:17.101 "percent": 32 00:18:17.101 } 00:18:17.101 }, 00:18:17.101 "base_bdevs_list": [ 00:18:17.101 { 00:18:17.101 "name": "spare", 00:18:17.101 "uuid": "0b296a93-1c7c-5d12-8f37-729e0703cffa", 00:18:17.101 "is_configured": true, 00:18:17.101 "data_offset": 256, 00:18:17.101 "data_size": 7936 00:18:17.101 }, 00:18:17.101 { 00:18:17.101 "name": "BaseBdev2", 00:18:17.101 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:17.101 "is_configured": true, 00:18:17.101 "data_offset": 256, 00:18:17.102 "data_size": 7936 00:18:17.102 } 00:18:17.102 ] 00:18:17.102 }' 00:18:17.102 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.102 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.102 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.102 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.102 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:17.102 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.102 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.102 [2024-11-26 
21:25:35.198578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:17.360 [2024-11-26 21:25:35.272147] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:17.361 [2024-11-26 21:25:35.272211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.361 [2024-11-26 21:25:35.272229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:17.361 [2024-11-26 21:25:35.272236] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.361 21:25:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.361 "name": "raid_bdev1", 00:18:17.361 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:17.361 "strip_size_kb": 0, 00:18:17.361 "state": "online", 00:18:17.361 "raid_level": "raid1", 00:18:17.361 "superblock": true, 00:18:17.361 "num_base_bdevs": 2, 00:18:17.361 "num_base_bdevs_discovered": 1, 00:18:17.361 "num_base_bdevs_operational": 1, 00:18:17.361 "base_bdevs_list": [ 00:18:17.361 { 00:18:17.361 "name": null, 00:18:17.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.361 "is_configured": false, 00:18:17.361 "data_offset": 0, 00:18:17.361 "data_size": 7936 00:18:17.361 }, 00:18:17.361 { 00:18:17.361 "name": "BaseBdev2", 00:18:17.361 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:17.361 "is_configured": true, 00:18:17.361 "data_offset": 256, 00:18:17.361 "data_size": 7936 00:18:17.361 } 00:18:17.361 ] 00:18:17.361 }' 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.361 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.620 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.620 21:25:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.620 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.620 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.620 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.620 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.620 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.620 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.620 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.620 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.620 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.620 "name": "raid_bdev1", 00:18:17.620 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:17.620 "strip_size_kb": 0, 00:18:17.620 "state": "online", 00:18:17.620 "raid_level": "raid1", 00:18:17.620 "superblock": true, 00:18:17.620 "num_base_bdevs": 2, 00:18:17.620 "num_base_bdevs_discovered": 1, 00:18:17.620 "num_base_bdevs_operational": 1, 00:18:17.620 "base_bdevs_list": [ 00:18:17.620 { 00:18:17.620 "name": null, 00:18:17.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.620 "is_configured": false, 00:18:17.620 "data_offset": 0, 00:18:17.620 "data_size": 7936 00:18:17.620 }, 00:18:17.620 { 00:18:17.620 "name": "BaseBdev2", 00:18:17.620 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:17.620 "is_configured": true, 00:18:17.620 "data_offset": 256, 
00:18:17.620 "data_size": 7936 00:18:17.620 } 00:18:17.620 ] 00:18:17.620 }' 00:18:17.878 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.878 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.878 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.878 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.878 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:17.878 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.878 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.878 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.878 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:17.878 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.878 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.878 [2024-11-26 21:25:35.890557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:17.878 [2024-11-26 21:25:35.890611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.878 [2024-11-26 21:25:35.890636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:17.878 [2024-11-26 21:25:35.890645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.878 [2024-11-26 21:25:35.890835] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.878 [2024-11-26 21:25:35.890849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:17.878 [2024-11-26 21:25:35.890898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:17.878 [2024-11-26 21:25:35.890911] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:17.878 [2024-11-26 21:25:35.890922] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:17.879 [2024-11-26 21:25:35.890933] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:17.879 BaseBdev1 00:18:17.879 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.879 21:25:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.814 21:25:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.814 "name": "raid_bdev1", 00:18:18.814 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:18.814 "strip_size_kb": 0, 00:18:18.814 "state": "online", 00:18:18.814 "raid_level": "raid1", 00:18:18.814 "superblock": true, 00:18:18.814 "num_base_bdevs": 2, 00:18:18.814 "num_base_bdevs_discovered": 1, 00:18:18.814 "num_base_bdevs_operational": 1, 00:18:18.814 "base_bdevs_list": [ 00:18:18.814 { 00:18:18.814 "name": null, 00:18:18.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.814 "is_configured": false, 00:18:18.814 "data_offset": 0, 00:18:18.814 "data_size": 7936 00:18:18.814 }, 00:18:18.814 { 00:18:18.814 "name": "BaseBdev2", 00:18:18.814 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:18.814 "is_configured": true, 00:18:18.814 "data_offset": 256, 00:18:18.814 "data_size": 7936 00:18:18.814 } 00:18:18.814 ] 00:18:18.814 }' 00:18:18.814 21:25:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.814 21:25:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.382 "name": "raid_bdev1", 00:18:19.382 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:19.382 "strip_size_kb": 0, 00:18:19.382 "state": "online", 00:18:19.382 "raid_level": "raid1", 00:18:19.382 "superblock": true, 00:18:19.382 "num_base_bdevs": 2, 00:18:19.382 "num_base_bdevs_discovered": 1, 00:18:19.382 "num_base_bdevs_operational": 1, 00:18:19.382 "base_bdevs_list": [ 00:18:19.382 { 00:18:19.382 "name": 
null, 00:18:19.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.382 "is_configured": false, 00:18:19.382 "data_offset": 0, 00:18:19.382 "data_size": 7936 00:18:19.382 }, 00:18:19.382 { 00:18:19.382 "name": "BaseBdev2", 00:18:19.382 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:19.382 "is_configured": true, 00:18:19.382 "data_offset": 256, 00:18:19.382 "data_size": 7936 00:18:19.382 } 00:18:19.382 ] 00:18:19.382 }' 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.382 [2024-11-26 21:25:37.519898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.382 [2024-11-26 21:25:37.520053] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:19.382 [2024-11-26 21:25:37.520071] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:19.382 request: 00:18:19.382 { 00:18:19.382 "base_bdev": "BaseBdev1", 00:18:19.382 "raid_bdev": "raid_bdev1", 00:18:19.382 "method": "bdev_raid_add_base_bdev", 00:18:19.382 "req_id": 1 00:18:19.382 } 00:18:19.382 Got JSON-RPC error response 00:18:19.382 response: 00:18:19.382 { 00:18:19.382 "code": -22, 00:18:19.382 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:19.382 } 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.382 21:25:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.759 "name": "raid_bdev1", 00:18:20.759 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:20.759 "strip_size_kb": 0, 
00:18:20.759 "state": "online", 00:18:20.759 "raid_level": "raid1", 00:18:20.759 "superblock": true, 00:18:20.759 "num_base_bdevs": 2, 00:18:20.759 "num_base_bdevs_discovered": 1, 00:18:20.759 "num_base_bdevs_operational": 1, 00:18:20.759 "base_bdevs_list": [ 00:18:20.759 { 00:18:20.759 "name": null, 00:18:20.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.759 "is_configured": false, 00:18:20.759 "data_offset": 0, 00:18:20.759 "data_size": 7936 00:18:20.759 }, 00:18:20.759 { 00:18:20.759 "name": "BaseBdev2", 00:18:20.759 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:20.759 "is_configured": true, 00:18:20.759 "data_offset": 256, 00:18:20.759 "data_size": 7936 00:18:20.759 } 00:18:20.759 ] 00:18:20.759 }' 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.759 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.759 
21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.018 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.018 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.018 "name": "raid_bdev1", 00:18:21.018 "uuid": "11672c6d-b405-4797-a31c-160cd74099ba", 00:18:21.018 "strip_size_kb": 0, 00:18:21.018 "state": "online", 00:18:21.018 "raid_level": "raid1", 00:18:21.018 "superblock": true, 00:18:21.018 "num_base_bdevs": 2, 00:18:21.018 "num_base_bdevs_discovered": 1, 00:18:21.018 "num_base_bdevs_operational": 1, 00:18:21.018 "base_bdevs_list": [ 00:18:21.018 { 00:18:21.018 "name": null, 00:18:21.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.018 "is_configured": false, 00:18:21.018 "data_offset": 0, 00:18:21.018 "data_size": 7936 00:18:21.018 }, 00:18:21.018 { 00:18:21.018 "name": "BaseBdev2", 00:18:21.018 "uuid": "aa07d99d-c3b7-5a80-9645-fe2a427472be", 00:18:21.018 "is_configured": true, 00:18:21.018 "data_offset": 256, 00:18:21.018 "data_size": 7936 00:18:21.018 } 00:18:21.018 ] 00:18:21.018 }' 00:18:21.018 21:25:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.018 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.018 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.018 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.018 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88824 00:18:21.018 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88824 ']' 00:18:21.018 21:25:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88824 00:18:21.018 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:21.018 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.018 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88824 00:18:21.018 killing process with pid 88824 00:18:21.018 Received shutdown signal, test time was about 60.000000 seconds 00:18:21.018 00:18:21.018 Latency(us) 00:18:21.018 [2024-11-26T21:25:39.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.018 [2024-11-26T21:25:39.174Z] =================================================================================================================== 00:18:21.018 [2024-11-26T21:25:39.174Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.019 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.019 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.019 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88824' 00:18:21.019 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88824 00:18:21.019 [2024-11-26 21:25:39.083931] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:21.019 [2024-11-26 21:25:39.084051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.019 [2024-11-26 21:25:39.084095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.019 [2024-11-26 21:25:39.084107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:21.019 21:25:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88824 00:18:21.276 [2024-11-26 21:25:39.393588] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.654 21:25:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:22.654 00:18:22.654 real 0m17.602s 00:18:22.654 user 0m22.836s 00:18:22.654 sys 0m1.796s 00:18:22.654 21:25:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.654 ************************************ 00:18:22.654 END TEST raid_rebuild_test_sb_md_interleaved 00:18:22.654 ************************************ 00:18:22.654 21:25:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.654 21:25:40 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:22.654 21:25:40 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:22.654 21:25:40 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88824 ']' 00:18:22.654 21:25:40 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88824 00:18:22.654 21:25:40 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:22.654 00:18:22.654 real 11m52.060s 00:18:22.654 user 15m56.938s 00:18:22.654 sys 1m52.812s 00:18:22.654 ************************************ 00:18:22.654 END TEST bdev_raid 00:18:22.654 ************************************ 00:18:22.654 21:25:40 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.654 21:25:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.654 21:25:40 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:22.654 21:25:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:22.654 21:25:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.654 21:25:40 -- common/autotest_common.sh@10 -- # set +x 00:18:22.654 
************************************ 00:18:22.654 START TEST spdkcli_raid 00:18:22.654 ************************************ 00:18:22.654 21:25:40 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:22.915 * Looking for test storage... 00:18:22.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.915 21:25:40 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:22.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.915 --rc genhtml_branch_coverage=1 00:18:22.915 --rc genhtml_function_coverage=1 00:18:22.915 --rc genhtml_legend=1 00:18:22.915 --rc geninfo_all_blocks=1 00:18:22.915 --rc geninfo_unexecuted_blocks=1 00:18:22.915 00:18:22.915 ' 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:22.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.915 --rc genhtml_branch_coverage=1 00:18:22.915 --rc genhtml_function_coverage=1 00:18:22.915 --rc genhtml_legend=1 00:18:22.915 --rc geninfo_all_blocks=1 00:18:22.915 --rc geninfo_unexecuted_blocks=1 00:18:22.915 00:18:22.915 ' 00:18:22.915 
21:25:40 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:22.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.915 --rc genhtml_branch_coverage=1 00:18:22.915 --rc genhtml_function_coverage=1 00:18:22.915 --rc genhtml_legend=1 00:18:22.915 --rc geninfo_all_blocks=1 00:18:22.915 --rc geninfo_unexecuted_blocks=1 00:18:22.915 00:18:22.915 ' 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:22.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.915 --rc genhtml_branch_coverage=1 00:18:22.915 --rc genhtml_function_coverage=1 00:18:22.915 --rc genhtml_legend=1 00:18:22.915 --rc geninfo_all_blocks=1 00:18:22.915 --rc geninfo_unexecuted_blocks=1 00:18:22.915 00:18:22.915 ' 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:22.915 21:25:40 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89501 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:22.915 21:25:40 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89501 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89501 ']' 00:18:22.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.915 21:25:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.175 [2024-11-26 21:25:41.096260] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:18:23.175 [2024-11-26 21:25:41.096848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89501 ] 00:18:23.175 [2024-11-26 21:25:41.297502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:23.434 [2024-11-26 21:25:41.435570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.434 [2024-11-26 21:25:41.435610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.371 21:25:42 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.371 21:25:42 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:24.371 21:25:42 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:24.371 21:25:42 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:24.371 21:25:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.371 21:25:42 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:24.371 21:25:42 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.371 21:25:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.371 21:25:42 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:24.371 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:24.371 ' 00:18:26.275 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:26.275 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:26.275 21:25:44 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:26.275 21:25:44 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.275 21:25:44 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:26.275 21:25:44 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:26.275 21:25:44 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.275 21:25:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.275 21:25:44 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:26.275 ' 00:18:27.210 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:27.210 21:25:45 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:27.210 21:25:45 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.210 21:25:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.468 21:25:45 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:27.468 21:25:45 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.468 21:25:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.468 21:25:45 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:27.468 21:25:45 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:28.033 21:25:45 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:28.033 21:25:45 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:28.033 21:25:45 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:28.033 21:25:45 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.033 21:25:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.033 21:25:46 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:28.033 21:25:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.033 21:25:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.033 21:25:46 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:28.033 ' 00:18:28.965 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:28.965 21:25:47 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:28.965 21:25:47 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.965 21:25:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.223 21:25:47 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:29.223 21:25:47 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.223 21:25:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.223 21:25:47 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:29.223 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:29.223 ' 00:18:30.631 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:30.631 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:30.631 21:25:48 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:30.631 21:25:48 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:30.631 21:25:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.631 21:25:48 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89501 00:18:30.631 21:25:48 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89501 ']' 00:18:30.631 21:25:48 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89501 00:18:30.631 21:25:48 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:30.631 21:25:48 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.631 21:25:48 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89501 00:18:30.631 21:25:48 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.631 21:25:48 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.631 21:25:48 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89501' 00:18:30.631 killing process with pid 89501 00:18:30.631 21:25:48 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89501 00:18:30.631 21:25:48 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89501 00:18:33.172 21:25:51 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:33.172 21:25:51 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89501 ']' 00:18:33.172 21:25:51 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89501 00:18:33.172 21:25:51 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89501 ']' 00:18:33.172 21:25:51 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89501 00:18:33.172 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89501) - No such process 00:18:33.172 21:25:51 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89501 is not found' 00:18:33.172 Process with pid 89501 is not found 00:18:33.172 21:25:51 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:33.172 21:25:51 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:33.172 21:25:51 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:33.172 21:25:51 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:33.172 ************************************ 00:18:33.172 END TEST spdkcli_raid 
00:18:33.172 ************************************ 00:18:33.172 00:18:33.172 real 0m10.548s 00:18:33.172 user 0m21.410s 00:18:33.172 sys 0m1.371s 00:18:33.172 21:25:51 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.172 21:25:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.433 21:25:51 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:33.433 21:25:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:33.433 21:25:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.433 21:25:51 -- common/autotest_common.sh@10 -- # set +x 00:18:33.433 ************************************ 00:18:33.433 START TEST blockdev_raid5f 00:18:33.433 ************************************ 00:18:33.433 21:25:51 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:33.433 * Looking for test storage... 00:18:33.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:33.433 21:25:51 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:33.433 21:25:51 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:18:33.433 21:25:51 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:33.433 21:25:51 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:33.433 21:25:51 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:33.433 21:25:51 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:33.433 21:25:51 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:33.433 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.433 --rc genhtml_branch_coverage=1 00:18:33.433 --rc genhtml_function_coverage=1 00:18:33.433 --rc genhtml_legend=1 00:18:33.433 --rc geninfo_all_blocks=1 00:18:33.433 --rc geninfo_unexecuted_blocks=1 00:18:33.433 00:18:33.433 ' 00:18:33.433 21:25:51 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:33.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.433 --rc genhtml_branch_coverage=1 00:18:33.433 --rc genhtml_function_coverage=1 00:18:33.433 --rc genhtml_legend=1 00:18:33.433 --rc geninfo_all_blocks=1 00:18:33.433 --rc geninfo_unexecuted_blocks=1 00:18:33.433 00:18:33.433 ' 00:18:33.433 21:25:51 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:33.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.433 --rc genhtml_branch_coverage=1 00:18:33.433 --rc genhtml_function_coverage=1 00:18:33.433 --rc genhtml_legend=1 00:18:33.433 --rc geninfo_all_blocks=1 00:18:33.433 --rc geninfo_unexecuted_blocks=1 00:18:33.433 00:18:33.433 ' 00:18:33.433 21:25:51 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:33.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.433 --rc genhtml_branch_coverage=1 00:18:33.433 --rc genhtml_function_coverage=1 00:18:33.433 --rc genhtml_legend=1 00:18:33.433 --rc geninfo_all_blocks=1 00:18:33.433 --rc geninfo_unexecuted_blocks=1 00:18:33.433 00:18:33.433 ' 00:18:33.433 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:33.433 21:25:51 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:33.433 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:33.433 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:33.433 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:33.433 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:33.433 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:33.433 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:33.433 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:33.693 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:33.693 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:33.693 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:33.693 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:33.693 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:33.693 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89791 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:33.694 21:25:51 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89791 00:18:33.694 21:25:51 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89791 ']' 00:18:33.694 21:25:51 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.694 21:25:51 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.694 21:25:51 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.694 21:25:51 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.694 21:25:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:33.694 [2024-11-26 21:25:51.697252] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:18:33.694 [2024-11-26 21:25:51.697426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89791 ] 00:18:33.953 [2024-11-26 21:25:51.870662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.953 [2024-11-26 21:25:52.005455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.892 21:25:53 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.892 21:25:53 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:34.892 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:34.892 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:18:34.892 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:34.892 21:25:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.892 21:25:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.151 Malloc0 00:18:35.151 Malloc1 00:18:35.151 Malloc2 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.151 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.151 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:18:35.151 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.151 21:25:53 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.151 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.151 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.151 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:35.151 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:18:35.151 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.151 21:25:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.411 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:35.411 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7c69e50e-d076-423c-994c-659c7a679435"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7c69e50e-d076-423c-994c-659c7a679435",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7c69e50e-d076-423c-994c-659c7a679435",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6bd505b5-8605-44aa-b457-b86d65651d43",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "d3ac02f6-f33a-4ee1-be9b-f4e67621beb9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a120e18c-3701-49c1-a1eb-10534552b8b4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:35.411 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:35.411 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:35.411 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:18:35.411 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:35.411 21:25:53 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89791 00:18:35.411 21:25:53 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89791 ']' 00:18:35.411 21:25:53 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89791 00:18:35.411 21:25:53 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:35.411 21:25:53 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.411 
21:25:53 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89791 00:18:35.411 killing process with pid 89791 00:18:35.411 21:25:53 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.411 21:25:53 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.411 21:25:53 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89791' 00:18:35.412 21:25:53 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89791 00:18:35.412 21:25:53 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89791 00:18:38.702 21:25:56 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:38.702 21:25:56 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:38.702 21:25:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:38.702 21:25:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.702 21:25:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:38.702 ************************************ 00:18:38.702 START TEST bdev_hello_world 00:18:38.702 ************************************ 00:18:38.702 21:25:56 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:38.702 [2024-11-26 21:25:56.274498] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:18:38.702 [2024-11-26 21:25:56.274621] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89860 ] 00:18:38.702 [2024-11-26 21:25:56.452192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.702 [2024-11-26 21:25:56.584202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.272 [2024-11-26 21:25:57.182450] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:39.272 [2024-11-26 21:25:57.182501] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:39.272 [2024-11-26 21:25:57.182518] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:39.272 [2024-11-26 21:25:57.183006] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:39.272 [2024-11-26 21:25:57.183150] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:39.272 [2024-11-26 21:25:57.183166] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:39.272 [2024-11-26 21:25:57.183211] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:39.272 00:18:39.272 [2024-11-26 21:25:57.183227] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:40.651 00:18:40.651 real 0m2.454s 00:18:40.651 user 0m1.972s 00:18:40.651 sys 0m0.359s 00:18:40.651 ************************************ 00:18:40.651 END TEST bdev_hello_world 00:18:40.651 ************************************ 00:18:40.651 21:25:58 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.651 21:25:58 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:40.651 21:25:58 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:18:40.651 21:25:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:40.651 21:25:58 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.651 21:25:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:40.651 ************************************ 00:18:40.651 START TEST bdev_bounds 00:18:40.651 ************************************ 00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89913 00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:40.651 Process bdevio pid: 89913 00:18:40.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89913' 00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89913 00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89913 ']' 00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.651 21:25:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:40.651 [2024-11-26 21:25:58.799067] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:18:40.651 [2024-11-26 21:25:58.799645] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89913 ] 00:18:40.910 [2024-11-26 21:25:58.973241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:41.170 [2024-11-26 21:25:59.110105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.170 [2024-11-26 21:25:59.110248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.170 [2024-11-26 21:25:59.110277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.738 21:25:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.738 21:25:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:41.738 21:25:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:41.738 I/O targets: 00:18:41.738 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:41.738 00:18:41.738 00:18:41.738 CUnit - A unit testing framework for C - Version 2.1-3 00:18:41.738 http://cunit.sourceforge.net/ 00:18:41.738 00:18:41.738 00:18:41.738 Suite: bdevio tests on: raid5f 00:18:41.738 Test: blockdev write read block ...passed 00:18:41.738 Test: blockdev write zeroes read block ...passed 00:18:41.738 Test: blockdev write zeroes read no split ...passed 00:18:41.997 Test: blockdev write zeroes read split ...passed 00:18:41.997 Test: blockdev write zeroes read split partial ...passed 00:18:41.997 Test: blockdev reset ...passed 00:18:41.997 Test: blockdev write read 8 blocks ...passed 00:18:41.997 Test: blockdev write read size > 128k ...passed 00:18:41.997 Test: blockdev write read invalid size ...passed 00:18:41.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:18:41.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:41.997 Test: blockdev write read max offset ...passed 00:18:41.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:41.998 Test: blockdev writev readv 8 blocks ...passed 00:18:41.998 Test: blockdev writev readv 30 x 1block ...passed 00:18:41.998 Test: blockdev writev readv block ...passed 00:18:41.998 Test: blockdev writev readv size > 128k ...passed 00:18:41.998 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:41.998 Test: blockdev comparev and writev ...passed 00:18:41.998 Test: blockdev nvme passthru rw ...passed 00:18:41.998 Test: blockdev nvme passthru vendor specific ...passed 00:18:41.998 Test: blockdev nvme admin passthru ...passed 00:18:41.998 Test: blockdev copy ...passed 00:18:41.998 00:18:41.998 Run Summary: Type Total Ran Passed Failed Inactive 00:18:41.998 suites 1 1 n/a 0 0 00:18:41.998 tests 23 23 23 0 0 00:18:41.998 asserts 130 130 130 0 n/a 00:18:41.998 00:18:41.998 Elapsed time = 0.621 seconds 00:18:41.998 0 00:18:41.998 21:26:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89913 00:18:41.998 21:26:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89913 ']' 00:18:41.998 21:26:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89913 00:18:41.998 21:26:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:42.257 21:26:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.257 21:26:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89913 00:18:42.257 killing process with pid 89913 00:18:42.257 21:26:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.257 21:26:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.257 21:26:00 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89913' 00:18:42.257 21:26:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89913 00:18:42.257 21:26:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89913 00:18:43.634 ************************************ 00:18:43.634 END TEST bdev_bounds 00:18:43.634 ************************************ 00:18:43.634 21:26:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:43.634 00:18:43.634 real 0m2.940s 00:18:43.634 user 0m7.196s 00:18:43.634 sys 0m0.484s 00:18:43.634 21:26:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.634 21:26:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:43.634 21:26:01 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:43.634 21:26:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:43.634 21:26:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.634 21:26:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:43.634 ************************************ 00:18:43.634 START TEST bdev_nbd 00:18:43.634 ************************************ 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 
00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89973 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89973 /var/tmp/spdk-nbd.sock 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89973 ']' 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:43.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.634 21:26:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:43.893 [2024-11-26 21:26:01.820689] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:18:43.893 [2024-11-26 21:26:01.820914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.893 [2024-11-26 21:26:01.996835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.151 [2024-11-26 21:26:02.128488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx 
/var/tmp/spdk-nbd.sock raid5f 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:44.718 21:26:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.977 1+0 records in 00:18:44.977 1+0 records out 00:18:44.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423918 s, 9.7 MB/s 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:44.977 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:45.236 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:45.236 { 00:18:45.236 "nbd_device": "/dev/nbd0", 00:18:45.236 "bdev_name": "raid5f" 00:18:45.236 } 00:18:45.236 ]' 00:18:45.236 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:45.236 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:45.236 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:45.236 { 00:18:45.236 "nbd_device": "/dev/nbd0", 00:18:45.236 "bdev_name": "raid5f" 00:18:45.236 } 00:18:45.236 ]' 00:18:45.236 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:45.236 
21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.236 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:45.236 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:45.236 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:45.236 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.236 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:45.495 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:45.495 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:45.495 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:45.495 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.495 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.495 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:45.495 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:45.495 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.495 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:45.495 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.495 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:45.753 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:45.754 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:46.012 /dev/nbd0 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.012 1+0 records in 00:18:46.012 1+0 records out 00:18:46.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384985 s, 10.6 MB/s 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat 
-c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.012 21:26:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:46.271 { 00:18:46.271 "nbd_device": "/dev/nbd0", 00:18:46.271 "bdev_name": "raid5f" 00:18:46.271 } 00:18:46.271 ]' 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:46.271 { 00:18:46.271 "nbd_device": "/dev/nbd0", 00:18:46.271 "bdev_name": "raid5f" 00:18:46.271 } 00:18:46.271 ]' 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@66 -- # echo 1 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:46.271 256+0 records in 00:18:46.271 256+0 records out 00:18:46.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139558 s, 75.1 MB/s 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:46.271 256+0 records in 00:18:46.271 256+0 records out 00:18:46.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321184 s, 32.6 MB/s 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 
00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.271 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:46.530 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:46.530 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.530 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.530 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.530 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:18:46.530 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.530 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:46.530 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.530 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:46.530 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.530 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:46.788 21:26:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:47.063 malloc_lvol_verify 00:18:47.063 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:47.063 a04678e0-ba8c-4bc1-b806-b3f940291600 00:18:47.332 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:47.332 77f812b0-d0c7-498d-83dc-7fb66aa7696f 00:18:47.332 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:47.592 /dev/nbd0 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:47.592 mke2fs 1.47.0 (5-Feb-2023) 00:18:47.592 Discarding device blocks: 0/4096 done 00:18:47.592 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:47.592 00:18:47.592 Allocating group tables: 0/1 done 00:18:47.592 Writing inode tables: 0/1 done 00:18:47.592 Creating journal (1024 blocks): done 00:18:47.592 Writing superblocks and filesystem accounting information: 0/1 done 00:18:47.592 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.592 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89973 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89973 ']' 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89973 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.854 21:26:05 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89973 00:18:47.854 killing process with pid 89973 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89973' 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89973 00:18:47.854 21:26:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 89973 00:18:49.763 21:26:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:49.763 00:18:49.763 real 0m5.695s 00:18:49.763 user 0m7.459s 00:18:49.763 sys 0m1.396s 00:18:49.763 21:26:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.763 21:26:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:49.763 ************************************ 00:18:49.763 END TEST bdev_nbd 00:18:49.763 ************************************ 00:18:49.763 21:26:07 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:18:49.763 21:26:07 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:18:49.763 21:26:07 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:18:49.763 21:26:07 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:18:49.763 21:26:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:49.763 21:26:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.763 21:26:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:49.763 ************************************ 00:18:49.763 START TEST bdev_fio 00:18:49.763 ************************************ 00:18:49.763 21:26:07 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:49.763 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:49.763 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:49.763 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:49.763 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:49.763 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:49.763 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:49.763 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:49.764 
21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:49.764 
************************************ 00:18:49.764 START TEST bdev_fio_rw_verify 00:18:49.764 ************************************ 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:49.764 21:26:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:49.764 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:49.764 fio-3.35 00:18:49.764 Starting 1 thread 00:19:01.975 00:19:01.975 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90173: Tue Nov 26 21:26:18 2024 00:19:01.975 read: IOPS=12.2k, BW=47.6MiB/s (49.9MB/s)(476MiB/10001msec) 00:19:01.975 slat (nsec): min=17472, max=77506, avg=19528.19, stdev=2129.70 00:19:01.975 clat (usec): min=11, max=334, avg=132.72, stdev=46.39 00:19:01.975 lat (usec): min=30, max=361, avg=152.25, stdev=46.65 00:19:01.975 clat percentiles (usec): 00:19:01.975 | 50.000th=[ 135], 99.000th=[ 221], 99.900th=[ 251], 99.990th=[ 289], 00:19:01.975 | 99.999th=[ 334] 00:19:01.975 write: IOPS=12.8k, 
BW=49.9MiB/s (52.3MB/s)(493MiB/9873msec); 0 zone resets 00:19:01.975 slat (usec): min=7, max=235, avg=16.28, stdev= 3.60 00:19:01.975 clat (usec): min=59, max=1338, avg=302.00, stdev=40.93 00:19:01.975 lat (usec): min=74, max=1574, avg=318.27, stdev=41.94 00:19:01.975 clat percentiles (usec): 00:19:01.975 | 50.000th=[ 306], 99.000th=[ 379], 99.900th=[ 562], 99.990th=[ 1139], 00:19:01.975 | 99.999th=[ 1303] 00:19:01.975 bw ( KiB/s): min=46896, max=53672, per=98.71%, avg=50461.89, stdev=1600.07, samples=19 00:19:01.975 iops : min=11724, max=13418, avg=12615.47, stdev=400.02, samples=19 00:19:01.975 lat (usec) : 20=0.01%, 50=0.01%, 100=14.64%, 250=39.48%, 500=45.80% 00:19:01.975 lat (usec) : 750=0.05%, 1000=0.02% 00:19:01.975 lat (msec) : 2=0.01% 00:19:01.975 cpu : usr=98.96%, sys=0.37%, ctx=30, majf=0, minf=10002 00:19:01.975 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.975 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.975 issued rwts: total=121889,126184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.975 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:01.975 00:19:01.975 Run status group 0 (all jobs): 00:19:01.975 READ: bw=47.6MiB/s (49.9MB/s), 47.6MiB/s-47.6MiB/s (49.9MB/s-49.9MB/s), io=476MiB (499MB), run=10001-10001msec 00:19:01.975 WRITE: bw=49.9MiB/s (52.3MB/s), 49.9MiB/s-49.9MiB/s (52.3MB/s-52.3MB/s), io=493MiB (517MB), run=9873-9873msec 00:19:02.545 ----------------------------------------------------- 00:19:02.545 Suppressions used: 00:19:02.545 count bytes template 00:19:02.545 1 7 /usr/src/fio/parse.c 00:19:02.545 508 48768 /usr/src/fio/iolog.c 00:19:02.545 1 8 libtcmalloc_minimal.so 00:19:02.545 1 904 libcrypto.so 00:19:02.545 ----------------------------------------------------- 00:19:02.545 00:19:02.545 00:19:02.545 real 0m12.948s 00:19:02.545 user 0m13.086s 00:19:02.545 sys 0m0.730s 
00:19:02.545 21:26:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.545 21:26:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:02.545 ************************************ 00:19:02.545 END TEST bdev_fio_rw_verify 00:19:02.545 ************************************ 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 
00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:02.546 21:26:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7c69e50e-d076-423c-994c-659c7a679435"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7c69e50e-d076-423c-994c-659c7a679435",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7c69e50e-d076-423c-994c-659c7a679435",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6bd505b5-8605-44aa-b457-b86d65651d43",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "d3ac02f6-f33a-4ee1-be9b-f4e67621beb9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a120e18c-3701-49c1-a1eb-10534552b8b4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:02.546 21:26:20 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:02.806 21:26:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:02.806 21:26:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:02.806 21:26:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:02.806 /home/vagrant/spdk_repo/spdk 00:19:02.806 21:26:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:02.806 21:26:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:19:02.806 00:19:02.806 real 0m13.255s 00:19:02.806 user 0m13.218s 00:19:02.806 sys 0m0.875s 00:19:02.806 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.806 21:26:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:02.806 ************************************ 00:19:02.806 END TEST bdev_fio 00:19:02.806 ************************************ 00:19:02.806 21:26:20 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:02.806 21:26:20 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:02.806 21:26:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:02.806 21:26:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.806 21:26:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:02.806 ************************************ 00:19:02.806 START TEST bdev_verify 00:19:02.806 ************************************ 00:19:02.806 21:26:20 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:02.806 [2024-11-26 21:26:20.906929] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:02.806 [2024-11-26 21:26:20.907055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90339 ] 00:19:03.065 [2024-11-26 21:26:21.080520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:03.065 [2024-11-26 21:26:21.215687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.065 [2024-11-26 21:26:21.215721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.001 Running I/O for 5 seconds... 00:19:05.872 10384.00 IOPS, 40.56 MiB/s [2024-11-26T21:26:24.963Z] 10446.50 IOPS, 40.81 MiB/s [2024-11-26T21:26:25.899Z] 10502.33 IOPS, 41.02 MiB/s [2024-11-26T21:26:26.835Z] 10506.00 IOPS, 41.04 MiB/s [2024-11-26T21:26:26.835Z] 10508.20 IOPS, 41.05 MiB/s 00:19:08.679 Latency(us) 00:19:08.679 [2024-11-26T21:26:26.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.679 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:08.679 Verification LBA range: start 0x0 length 0x2000 00:19:08.679 raid5f : 5.02 6388.44 24.95 0.00 0.00 30188.09 201.22 22322.31 00:19:08.679 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:08.679 Verification LBA range: start 0x2000 length 0x2000 00:19:08.679 raid5f : 5.03 4117.25 16.08 0.00 0.00 46806.32 219.11 34342.01 00:19:08.679 [2024-11-26T21:26:26.835Z] =================================================================================================================== 00:19:08.679 [2024-11-26T21:26:26.835Z] Total : 10505.70 41.04 0.00 0.00 36702.03 201.22 34342.01 00:19:10.583 
00:19:10.583 real 0m7.455s 00:19:10.583 user 0m13.688s 00:19:10.583 sys 0m0.375s 00:19:10.583 21:26:28 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.583 21:26:28 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:10.583 ************************************ 00:19:10.583 END TEST bdev_verify 00:19:10.583 ************************************ 00:19:10.583 21:26:28 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:10.583 21:26:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:10.583 21:26:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.583 21:26:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:10.583 ************************************ 00:19:10.583 START TEST bdev_verify_big_io 00:19:10.583 ************************************ 00:19:10.583 21:26:28 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:10.583 [2024-11-26 21:26:28.426478] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:19:10.583 [2024-11-26 21:26:28.426583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90436 ] 00:19:10.583 [2024-11-26 21:26:28.599877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:10.583 [2024-11-26 21:26:28.731486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.583 [2024-11-26 21:26:28.731517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.519 Running I/O for 5 seconds... 00:19:13.392 633.00 IOPS, 39.56 MiB/s [2024-11-26T21:26:32.485Z] 760.00 IOPS, 47.50 MiB/s [2024-11-26T21:26:33.863Z] 739.67 IOPS, 46.23 MiB/s [2024-11-26T21:26:34.811Z] 761.00 IOPS, 47.56 MiB/s [2024-11-26T21:26:34.811Z] 761.60 IOPS, 47.60 MiB/s 00:19:16.655 Latency(us) 00:19:16.655 [2024-11-26T21:26:34.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.655 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:16.655 Verification LBA range: start 0x0 length 0x200 00:19:16.655 raid5f : 5.18 441.21 27.58 0.00 0.00 7288305.82 277.24 315030.69 00:19:16.655 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:16.655 Verification LBA range: start 0x200 length 0x200 00:19:16.655 raid5f : 5.26 337.92 21.12 0.00 0.00 9409982.73 215.53 401114.66 00:19:16.655 [2024-11-26T21:26:34.811Z] =================================================================================================================== 00:19:16.655 [2024-11-26T21:26:34.811Z] Total : 779.13 48.70 0.00 0.00 8216539.47 215.53 401114.66 00:19:18.050 00:19:18.050 real 0m7.708s 00:19:18.050 user 0m14.230s 00:19:18.050 sys 0m0.342s 00:19:18.050 21:26:36 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.050 21:26:36 
blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.050 ************************************ 00:19:18.050 END TEST bdev_verify_big_io 00:19:18.050 ************************************ 00:19:18.050 21:26:36 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:18.050 21:26:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:18.050 21:26:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.050 21:26:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:18.050 ************************************ 00:19:18.050 START TEST bdev_write_zeroes 00:19:18.050 ************************************ 00:19:18.050 21:26:36 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:18.310 [2024-11-26 21:26:36.212960] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:18.310 [2024-11-26 21:26:36.213088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90536 ] 00:19:18.310 [2024-11-26 21:26:36.385908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.569 [2024-11-26 21:26:36.514016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.137 Running I/O for 1 seconds... 
00:19:20.073 29703.00 IOPS, 116.03 MiB/s 00:19:20.073 Latency(us) 00:19:20.073 [2024-11-26T21:26:38.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.073 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:20.073 raid5f : 1.01 29684.94 115.96 0.00 0.00 4300.19 1359.37 5866.76 00:19:20.073 [2024-11-26T21:26:38.229Z] =================================================================================================================== 00:19:20.073 [2024-11-26T21:26:38.229Z] Total : 29684.94 115.96 0.00 0.00 4300.19 1359.37 5866.76 00:19:21.450 00:19:21.450 real 0m3.461s 00:19:21.450 user 0m2.993s 00:19:21.450 sys 0m0.339s 00:19:21.450 21:26:39 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.450 21:26:39 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:21.450 ************************************ 00:19:21.450 END TEST bdev_write_zeroes 00:19:21.450 ************************************ 00:19:21.710 21:26:39 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:21.710 21:26:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:21.710 21:26:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.710 21:26:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:21.710 ************************************ 00:19:21.710 START TEST bdev_json_nonenclosed 00:19:21.710 ************************************ 00:19:21.710 21:26:39 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:21.710 [2024-11-26 
21:26:39.760232] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:21.710 [2024-11-26 21:26:39.760369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90589 ] 00:19:21.968 [2024-11-26 21:26:39.939361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.968 [2024-11-26 21:26:40.069080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.968 [2024-11-26 21:26:40.069192] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:21.968 [2024-11-26 21:26:40.069222] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:21.968 [2024-11-26 21:26:40.069233] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:22.228 00:19:22.228 real 0m0.666s 00:19:22.228 user 0m0.400s 00:19:22.228 sys 0m0.161s 00:19:22.228 21:26:40 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.228 21:26:40 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:22.228 ************************************ 00:19:22.228 END TEST bdev_json_nonenclosed 00:19:22.228 ************************************ 00:19:22.489 21:26:40 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:22.489 21:26:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:22.489 21:26:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.489 21:26:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:22.489 
************************************ 00:19:22.489 START TEST bdev_json_nonarray 00:19:22.489 ************************************ 00:19:22.489 21:26:40 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:22.489 [2024-11-26 21:26:40.494032] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:22.489 [2024-11-26 21:26:40.494158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90620 ] 00:19:22.748 [2024-11-26 21:26:40.668384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.748 [2024-11-26 21:26:40.797711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.748 [2024-11-26 21:26:40.797827] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:22.748 [2024-11-26 21:26:40.797847] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:22.748 [2024-11-26 21:26:40.797866] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:23.008 00:19:23.008 real 0m0.654s 00:19:23.008 user 0m0.411s 00:19:23.008 sys 0m0.137s 00:19:23.008 21:26:41 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.008 21:26:41 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:23.008 ************************************ 00:19:23.008 END TEST bdev_json_nonarray 00:19:23.009 ************************************ 00:19:23.009 21:26:41 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:19:23.009 21:26:41 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:19:23.009 21:26:41 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:19:23.009 21:26:41 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:23.009 21:26:41 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:19:23.009 21:26:41 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:23.009 21:26:41 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:23.009 21:26:41 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:23.009 21:26:41 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:23.009 21:26:41 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:23.009 21:26:41 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:23.009 00:19:23.009 real 0m49.777s 00:19:23.009 user 1m6.267s 00:19:23.009 sys 0m5.755s 00:19:23.009 21:26:41 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.009 21:26:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.009 
************************************ 00:19:23.009 END TEST blockdev_raid5f 00:19:23.009 ************************************ 00:19:23.268 21:26:41 -- spdk/autotest.sh@194 -- # uname -s 00:19:23.268 21:26:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:23.268 21:26:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:23.268 21:26:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:23.268 21:26:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:23.268 21:26:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.268 21:26:41 -- common/autotest_common.sh@10 -- # set +x 00:19:23.268 21:26:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:23.268 21:26:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:23.268 21:26:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:23.268 21:26:41 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:23.268 21:26:41 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:23.268 21:26:41 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:19:23.268 21:26:41 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:23.268 21:26:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.268 21:26:41 -- common/autotest_common.sh@10 -- # set +x 00:19:23.268 21:26:41 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:23.268 21:26:41 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:23.268 21:26:41 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:23.268 21:26:41 -- common/autotest_common.sh@10 -- # set +x 00:19:25.809 INFO: APP EXITING 00:19:25.809 INFO: killing all VMs 00:19:25.809 INFO: killing vhost app 00:19:25.809 INFO: EXIT DONE 00:19:26.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:26.069 Waiting for block devices as requested 00:19:26.069 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:26.329 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:27.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:27.270 Cleaning 00:19:27.270 Removing: /var/run/dpdk/spdk0/config 00:19:27.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:27.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:27.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:27.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:27.270 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:27.270 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:27.270 Removing: /dev/shm/spdk_tgt_trace.pid56853 00:19:27.270 Removing: /var/run/dpdk/spdk0 00:19:27.270 Removing: /var/run/dpdk/spdk_pid56613 00:19:27.270 Removing: /var/run/dpdk/spdk_pid56853 00:19:27.270 Removing: /var/run/dpdk/spdk_pid57088 00:19:27.270 Removing: /var/run/dpdk/spdk_pid57192 00:19:27.270 Removing: /var/run/dpdk/spdk_pid57248 00:19:27.270 Removing: /var/run/dpdk/spdk_pid57376 00:19:27.270 Removing: /var/run/dpdk/spdk_pid57394 
00:19:27.270 Removing: /var/run/dpdk/spdk_pid57604 00:19:27.270 Removing: /var/run/dpdk/spdk_pid57716 00:19:27.270 Removing: /var/run/dpdk/spdk_pid57823 00:19:27.270 Removing: /var/run/dpdk/spdk_pid57945 00:19:27.270 Removing: /var/run/dpdk/spdk_pid58053 00:19:27.270 Removing: /var/run/dpdk/spdk_pid58092 00:19:27.270 Removing: /var/run/dpdk/spdk_pid58129 00:19:27.270 Removing: /var/run/dpdk/spdk_pid58205 00:19:27.270 Removing: /var/run/dpdk/spdk_pid58311 00:19:27.270 Removing: /var/run/dpdk/spdk_pid58747 00:19:27.270 Removing: /var/run/dpdk/spdk_pid58822 00:19:27.270 Removing: /var/run/dpdk/spdk_pid58891 00:19:27.270 Removing: /var/run/dpdk/spdk_pid58912 00:19:27.270 Removing: /var/run/dpdk/spdk_pid59057 00:19:27.270 Removing: /var/run/dpdk/spdk_pid59073 00:19:27.270 Removing: /var/run/dpdk/spdk_pid59222 00:19:27.270 Removing: /var/run/dpdk/spdk_pid59238 00:19:27.270 Removing: /var/run/dpdk/spdk_pid59310 00:19:27.271 Removing: /var/run/dpdk/spdk_pid59333 00:19:27.271 Removing: /var/run/dpdk/spdk_pid59397 00:19:27.271 Removing: /var/run/dpdk/spdk_pid59421 00:19:27.271 Removing: /var/run/dpdk/spdk_pid59618 00:19:27.271 Removing: /var/run/dpdk/spdk_pid59654 00:19:27.271 Removing: /var/run/dpdk/spdk_pid59738 00:19:27.271 Removing: /var/run/dpdk/spdk_pid61075 00:19:27.271 Removing: /var/run/dpdk/spdk_pid61285 00:19:27.271 Removing: /var/run/dpdk/spdk_pid61425 00:19:27.271 Removing: /var/run/dpdk/spdk_pid62064 00:19:27.271 Removing: /var/run/dpdk/spdk_pid62276 00:19:27.271 Removing: /var/run/dpdk/spdk_pid62416 00:19:27.271 Removing: /var/run/dpdk/spdk_pid63059 00:19:27.271 Removing: /var/run/dpdk/spdk_pid63389 00:19:27.271 Removing: /var/run/dpdk/spdk_pid63535 00:19:27.271 Removing: /var/run/dpdk/spdk_pid64920 00:19:27.271 Removing: /var/run/dpdk/spdk_pid65172 00:19:27.271 Removing: /var/run/dpdk/spdk_pid65313 00:19:27.271 Removing: /var/run/dpdk/spdk_pid66694 00:19:27.271 Removing: /var/run/dpdk/spdk_pid66947 00:19:27.531 Removing: /var/run/dpdk/spdk_pid67087 
00:19:27.531 Removing: /var/run/dpdk/spdk_pid68472 00:19:27.531 Removing: /var/run/dpdk/spdk_pid68920 00:19:27.531 Removing: /var/run/dpdk/spdk_pid69066 00:19:27.531 Removing: /var/run/dpdk/spdk_pid70540 00:19:27.531 Removing: /var/run/dpdk/spdk_pid70805 00:19:27.531 Removing: /var/run/dpdk/spdk_pid70945 00:19:27.531 Removing: /var/run/dpdk/spdk_pid72429 00:19:27.531 Removing: /var/run/dpdk/spdk_pid72689 00:19:27.531 Removing: /var/run/dpdk/spdk_pid72837 00:19:27.531 Removing: /var/run/dpdk/spdk_pid74325 00:19:27.531 Removing: /var/run/dpdk/spdk_pid74818 00:19:27.531 Removing: /var/run/dpdk/spdk_pid74962 00:19:27.531 Removing: /var/run/dpdk/spdk_pid75106 00:19:27.531 Removing: /var/run/dpdk/spdk_pid75529 00:19:27.531 Removing: /var/run/dpdk/spdk_pid76259 00:19:27.531 Removing: /var/run/dpdk/spdk_pid76636 00:19:27.531 Removing: /var/run/dpdk/spdk_pid77328 00:19:27.531 Removing: /var/run/dpdk/spdk_pid77775 00:19:27.531 Removing: /var/run/dpdk/spdk_pid78537 00:19:27.531 Removing: /var/run/dpdk/spdk_pid78965 00:19:27.531 Removing: /var/run/dpdk/spdk_pid80934 00:19:27.531 Removing: /var/run/dpdk/spdk_pid81373 00:19:27.531 Removing: /var/run/dpdk/spdk_pid81809 00:19:27.531 Removing: /var/run/dpdk/spdk_pid83906 00:19:27.531 Removing: /var/run/dpdk/spdk_pid84392 00:19:27.531 Removing: /var/run/dpdk/spdk_pid84921 00:19:27.531 Removing: /var/run/dpdk/spdk_pid85977 00:19:27.531 Removing: /var/run/dpdk/spdk_pid86302 00:19:27.531 Removing: /var/run/dpdk/spdk_pid87240 00:19:27.531 Removing: /var/run/dpdk/spdk_pid87563 00:19:27.531 Removing: /var/run/dpdk/spdk_pid88501 00:19:27.531 Removing: /var/run/dpdk/spdk_pid88824 00:19:27.531 Removing: /var/run/dpdk/spdk_pid89501 00:19:27.531 Removing: /var/run/dpdk/spdk_pid89791 00:19:27.531 Removing: /var/run/dpdk/spdk_pid89860 00:19:27.531 Removing: /var/run/dpdk/spdk_pid89913 00:19:27.531 Removing: /var/run/dpdk/spdk_pid90157 00:19:27.531 Removing: /var/run/dpdk/spdk_pid90339 00:19:27.531 Removing: /var/run/dpdk/spdk_pid90436 
00:19:27.531 Removing: /var/run/dpdk/spdk_pid90536 00:19:27.531 Removing: /var/run/dpdk/spdk_pid90589 00:19:27.531 Removing: /var/run/dpdk/spdk_pid90620 00:19:27.531 Clean 00:19:27.531 21:26:45 -- common/autotest_common.sh@1453 -- # return 0 00:19:27.531 21:26:45 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:27.531 21:26:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.531 21:26:45 -- common/autotest_common.sh@10 -- # set +x 00:19:27.791 21:26:45 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:27.791 21:26:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.791 21:26:45 -- common/autotest_common.sh@10 -- # set +x 00:19:27.791 21:26:45 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:27.791 21:26:45 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:27.791 21:26:45 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:27.791 21:26:45 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:27.791 21:26:45 -- spdk/autotest.sh@398 -- # hostname 00:19:27.791 21:26:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:28.051 geninfo: WARNING: invalid characters removed from testname! 
00:19:50.005 21:27:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:51.915 21:27:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:53.823 21:27:11 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:55.724 21:27:13 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:57.630 21:27:15 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:59.540 21:27:17 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:01.449 21:27:19 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:01.450 21:27:19 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:01.450 21:27:19 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:01.450 21:27:19 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:01.450 21:27:19 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:01.450 21:27:19 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:01.450 + [[ -n 5430 ]] 00:20:01.450 + sudo kill 5430 00:20:01.459 [Pipeline] } 00:20:01.475 [Pipeline] // timeout 00:20:01.479 [Pipeline] } 00:20:01.494 [Pipeline] // stage 00:20:01.499 [Pipeline] } 00:20:01.513 [Pipeline] // catchError 00:20:01.522 [Pipeline] stage 00:20:01.524 [Pipeline] { (Stop VM) 00:20:01.538 [Pipeline] sh 00:20:01.876 + vagrant halt 00:20:03.785 ==> default: Halting domain... 00:20:11.929 [Pipeline] sh 00:20:12.216 + vagrant destroy -f 00:20:14.758 ==> default: Removing domain... 
00:20:14.772 [Pipeline] sh 00:20:15.057 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:15.067 [Pipeline] } 00:20:15.083 [Pipeline] // stage 00:20:15.090 [Pipeline] } 00:20:15.106 [Pipeline] // dir 00:20:15.113 [Pipeline] } 00:20:15.130 [Pipeline] // wrap 00:20:15.138 [Pipeline] } 00:20:15.152 [Pipeline] // catchError 00:20:15.163 [Pipeline] stage 00:20:15.165 [Pipeline] { (Epilogue) 00:20:15.180 [Pipeline] sh 00:20:15.465 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:19.678 [Pipeline] catchError 00:20:19.680 [Pipeline] { 00:20:19.695 [Pipeline] sh 00:20:19.981 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:19.981 Artifacts sizes are good 00:20:19.991 [Pipeline] } 00:20:20.012 [Pipeline] // catchError 00:20:20.033 [Pipeline] archiveArtifacts 00:20:20.040 Archiving artifacts 00:20:20.144 [Pipeline] cleanWs 00:20:20.157 [WS-CLEANUP] Deleting project workspace... 00:20:20.157 [WS-CLEANUP] Deferred wipeout is used... 00:20:20.164 [WS-CLEANUP] done 00:20:20.166 [Pipeline] } 00:20:20.181 [Pipeline] // stage 00:20:20.185 [Pipeline] } 00:20:20.198 [Pipeline] // node 00:20:20.203 [Pipeline] End of Pipeline 00:20:20.236 Finished: SUCCESS